Race condition when using Kafka and JPA - Spring Boot

I have a problem when using microservices and Kafka.
For example, I have Service A and Service B. They communicate through Kafka and share the same database. Inside the database I have two entities, A and B, with a one-to-many relationship between them. When I update entity A in Service A, entity B gets updated as expected, but when I read the data from Service B I can't see the changes that were made in Service A.
Example code from my case.
Here we are in Service A:
KafkaService:
public synchronized void getDriverService(Long orderId, Double longitude, Double latitude) {
    driverService.getDriver(orderId, longitude, latitude);
    driverService.collectionOrder(orderId);
}
DriverService:
public void getDriver(Long orderId, Double longitude, Double latitude) {
    final Driver[] y = { new Driver() };
    ascOrderRepository.findById(orderId).ifPresentOrElse(x -> {
        List<DriverDTO> drivers = findAllCarNearMe(latitude, longitude);
        if (drivers.isEmpty())
            throwEmptyDriver();
        AscOrderDTO orderDto = ascOrderMapper.toDto(x);
        int check;
        for (DriverDTO dr : drivers) {
            check = checkDriver();
            if (check < 8) {
                log.debug("///////////////////////// driver accept " + dr.getId().toString());
                dr.setStatus(UNAVAILABLE);
                dr.updateTotalTrip();
                Driver driver = driverMapper.toEntity(dr);
                driver.addOrders(x);
                y[0] = driverRepository.save(driver);
                log.debug(dr.toString());
                log.debug("///////////////////////// driver accepted here /////////////////////////");
                break;
            }
        }
    }, this::throwOrder);
}
// find all cars near me
public List<DriverDTO> findAllCarNearMe(Double latitude, Double longitude) {
    checkDistance(latitude, longitude);
    Point point = createPoint(latitude, longitude);
    List<Driver> drivers = driverRepository.findNearById(point, 10);
    return driverMapper.toDto(drivers);
}
public void collectionOrder(Long orderId) {
    ascOrderRepository.findById(orderId).ifPresentOrElse(y -> {
        if (y.getDriver() != null) { // the update is visible here, inside Service A
            try {
                driverProducer.driverCollectionOrder(y.getId());
            } catch (Exception e) {
                e.printStackTrace();
            }
        } else {
            throwDriverNotFind();
        }
    }, this::throwOrder);
}
This is the producer:
@Component
public class DriverProducer {

    private final KafkaTemplate<String, String> kafkaTemplate;

    public DriverProducer(KafkaTemplate<String, String> kafkaTemplate) {
        this.kafkaTemplate = kafkaTemplate;
    }

    public void driverCollectionOrder(Long orderId) throws Exception {
        ObjectMapper obj = new ObjectMapper();
        kafkaTemplate.send("collecting", obj.writeValueAsString(orderId));
    }
}
Service B:
This is the consumer:
@KafkaListener(topics = "collecting", groupId = groupId)
public void doneOrderStatus(String data) throws NumberFormatException, Exception {
    try {
        log.debug("i am in done order status order consumer");
        OrderEvent event = OrderEvent.TO_BE_COLLECTED;
        orderService.changeStatus(event, Long.parseLong(data));
    } catch (Exception e) {
        throw new Exception(e.getMessage());
    }
}
This method has my error:
public void changeStatus(OrderEvent event, Long orderId) throws Exception {
    try {
        Optional<AscOrder> order = ascOrderRepository.findById(orderId);
        if (!order.isPresent()) {
            throw new BadRequestAlertException("cannot find Order", "Order entity", "Id invalid");
        }
        if (order.get().getDriver() != null) { // can't see the change here
            log.debug("====================================================");
            log.debug(order.get().getDriver().toString());
            log.debug("====================================================");
        }
        log.debug("i am in changeStatus");
        stateMachineHandler.stateMachine(event, orderId);
        stateMachineHandler.handling(orderId);
    } catch (Exception e) {
        throw new Exception(e.getMessage());
    }
}

The problem is most likely the separate ORM sessions (persistence contexts) held by the two services.
To overcome this you can reload the entity. To do that:
1- Wire in the entity manager:
@Autowired
EntityManager entityManager;
2- Annotate the changeStatus method with @Transactional, unless there is already an active transaction going on.
3- Refresh the order entity:
entityManager.refresh(order.get())
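Putting the three steps together, a minimal sketch of what changeStatus could look like (this reuses ascOrderRepository, stateMachineHandler, log, and BadRequestAlertException from the question, and assumes the repository and the injected EntityManager share the same persistence context, as they do with Spring Data JPA defaults):
@Autowired
EntityManager entityManager;

@Transactional
public void changeStatus(OrderEvent event, Long orderId) throws Exception {
    AscOrder order = ascOrderRepository.findById(orderId)
            .orElseThrow(() -> new BadRequestAlertException("cannot find Order", "Order entity", "Id invalid"));

    // Step 3: discard whatever this service's persistence context has cached for this row
    // and reload it (including the driver association) from the database.
    entityManager.refresh(order);

    if (order.getDriver() != null) {
        log.debug(order.getDriver().toString());
    }
    stateMachineHandler.stateMachine(event, orderId);
    stateMachineHandler.handling(orderId);
}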

Related

What is the recommended way to handle Ethereum contract events in Spring Boot?

What is the appropriate way to handle live events (i.e. the service/component should keep listening for events and save them to an off-chain DB such as H2/Postgres)?
How do I close the event subscription gracefully?
Implementation tried so far:
@Component
public class ERC20Listener implements Listener {

    private final Logger logger = LoggerFactory.getLogger(this.getClass());

    /**
     * Do something useful with the state update received
     */
    @Override
    @PostConstruct
    public void listen() throws Exception {
        Web3j web3j = null;
        Disposable flowableEvent = null;
        try {
            WebSocketService web3jService = new WebSocketService("ws://", true);
            web3jService.connect();
            web3j = Web3j.build(web3jService);
            ERC20Token token = ... // creating contract instance
            flowableEvent = token.transferEventFlowable(DefaultBlockParameterName.LATEST, DefaultBlockParameterName.LATEST)
                    .subscribe(event -> {
                        try {
                            System.out.printf("hash=%s from=%s to=%s amount=%s%n",
                                    event.log.getTransactionHash(),
                                    event.from,
                                    event.to,
                                    event.value);
                            // process event data and save to off-chain db ==> service call
                        } catch (Throwable e) {
                            e.printStackTrace();
                        }
                    });
        } catch (Exception e) {
            logger.error("Unknown Exception " + e.getMessage());
            throw new Exception(e.getMessage());
        } finally {
            web3j.shutdown();
            flowableEvent.dispose();
        }
    }
}
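Regarding the graceful-shutdown part of the question: note that the finally block above runs as soon as listen() returns, so the subscription is disposed (and the client shut down) right after it is created. One possible approach, sketched here under the assumption that the Web3j client and the Disposable are kept as fields, is to release them in a @PreDestroy hook so they stay alive until the Spring context shuts down:
@Component
public class ERC20Listener {

    private Web3j web3j;
    private Disposable flowableEvent;

    @PostConstruct
    public void listen() throws Exception {
        WebSocketService web3jService = new WebSocketService("ws://", true);
        web3jService.connect();
        web3j = Web3j.build(web3jService);
        // ... build the contract instance and assign
        // flowableEvent = token.transferEventFlowable(...).subscribe(...) as in the snippet above
    }

    @PreDestroy
    public void stop() {
        // Runs when the Spring context shuts down: stop the event stream first, then the client.
        if (flowableEvent != null && !flowableEvent.isDisposed()) {
            flowableEvent.dispose();
        }
        if (web3j != null) {
            web3j.shutdown();
        }
    }
}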

How to fix 'No way to dispatch this command to Redis Cluster because keys have different slots' in Spring

I need to use Redis Cluster in Spring, but I'm getting the following error when I use mget or del with a list of keys: 'No way to dispatch this command to Redis Cluster because keys have different slots'. Below is part of my component code using JedisCluster.
Single-key operations work, but multi-key operations don't.
/* Component code */
public class RedisServiceManager {

    @Value("${redis.hosts}")
    String hosts;

    @Autowired
    JedisPoolConfig jedisPoolConfig;

    private JedisCluster jedisCluster;

    @PostConstruct
    private void init() {
        List<String> redisHosts = Arrays.asList(hosts.split(","));
        Set<HostAndPort> jedisClusterNode = new HashSet<HostAndPort>();
        redisHosts.forEach(redisHost -> {
            jedisClusterNode.add(new HostAndPort(redisHost, 6379));
        });
        jedisCluster = new JedisCluster(jedisClusterNode, jedisPoolConfig);
    }

    // This works
    public String getValueForKey(String key) {
        try {
            return jedisCluster.get(key);
        } catch (Exception e) {
            return null;
        }
    }

    // This works
    public void delKey(String cacheKey) {
        try {
            jedisCluster.del(cacheKey);
        } catch (Exception e) {
        }
    }

    // This doesn't work
    public List<String> getValuesForAllKeys(String... keys) {
        try {
            return jedisCluster.mget(keys);
        } catch (Exception e) {
            return new ArrayList<>();
        }
    }

    // This doesn't work
    public void delAllKeys(String... keys) {
        try {
            jedisCluster.del(keys);
        } catch (Exception e) {
        }
    }
}
Can someone help with this?
This is not a bug; it is how Redis Cluster works: multi-key commands such as MGET, or DEL with several keys, are only allowed when all the keys hash to the same slot. You can find more details in the cluster documentation. But don't worry, there is a trick: you can use hash tags, as described here.
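As an illustration of the hash-tag trick (the key names below are made up for this example): only the part of a key wrapped in {...} is hashed, so keys that share the same tag land in the same slot and can be used together in multi-key commands.
// Both keys share the "{user:42}" hash tag, so they hash to the same slot.
jedisCluster.set("{user:42}:name", "Alice");
jedisCluster.set("{user:42}:email", "alice@example.com");

// Multi-key operations now work because all keys map to one slot.
List<String> values = jedisCluster.mget("{user:42}:name", "{user:42}:email");
jedisCluster.del("{user:42}:name", "{user:42}:email");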

transactional unit testing with ObjectifyService - no rollback happening

We are trying to use Google Cloud Datastore in our project, with Objectify as the ORM since Google recommends it. I have carefully tried everything I could read about and think of, but somehow the transactions don't seem to work. Following is my code and setup.
@RunWith(SpringRunner.class)
@EnableAspectJAutoProxy(proxyTargetClass = true)
@ContextConfiguration(classes = { CoreTestConfiguration.class })
public class TestObjectifyTransactionAspect {

    private final LocalServiceTestHelper helper = new LocalServiceTestHelper(
            // Our tests assume strong consistency
            new LocalDatastoreServiceTestConfig().setApplyAllHighRepJobPolicy(),
            new LocalMemcacheServiceTestConfig(), new LocalTaskQueueTestConfig());

    private Closeable closeableSession;

    @Autowired
    private DummyService dummyService;

    @BeforeClass
    public static void setUpBeforeClass() {
        // Reset the Factory so that all translators work properly.
        ObjectifyService.setFactory(new ObjectifyFactory());
    }

    /**
     * @throws java.lang.Exception
     */
    @Before
    public void setUp() throws Exception {
        System.setProperty("DATASTORE_EMULATOR_HOST", "localhost:8081");
        ObjectifyService.register(UserEntity.class);
        this.closeableSession = ObjectifyService.begin();
        this.helper.setUp();
    }

    /**
     * @throws java.lang.Exception
     */
    @After
    public void tearDown() throws Exception {
        AsyncCacheFilter.complete();
        this.closeableSession.close();
        this.helper.tearDown();
    }

    @Test
    public void testTransactionMutationRollback() {
        // save initial list of users
        List<UserEntity> users = new ArrayList<UserEntity>();
        for (int i = 1; i <= 10; i++) {
            UserEntity user = new UserEntity();
            user.setAge(i);
            user.setUsername("username_" + i);
            users.add(user);
        }
        ObjectifyService.ofy().save().entities(users).now();

        try {
            dummyService.mutateDataWithException("username_1", 6L);
        } catch (Exception e) {
            e.printStackTrace();
        }

        List<UserEntity> users2 = this.dummyService.findAllUsers();
        Assert.assertEquals("Size mismatch on rollback", users2.size(), 10);

        boolean foundUserIdSix = false;
        for (UserEntity userEntity : users2) {
            if (userEntity.getUserId() == 1) {
                Assert.assertEquals("Username update failed in transactional context rollback.", "username_1",
                        userEntity.getUsername());
            }
            if (userEntity.getUserId() == 6) {
                foundUserIdSix = true;
            }
        }
        if (!foundUserIdSix) {
            Assert.fail("Deleted user with userId 6 but it is not rolled back.");
        }
    }
}
Since I am using Spring, the idea is to use an aspect with a custom annotation to weave Objectify's transact around the Spring service bean methods that call my DAOs.
But somehow the update made by ObjectifyService.ofy().save().entities(users).now(); is not getting rolled back, even though the thrown exception causes Objectify to run its rollback code. I tried printing the ObjectifyImpl instance hash codes and they are all the same, but it still does not roll back.
Can someone help me understand what I am doing wrong? I haven't tried the actual web-based setup yet; if it can't pass transactional test cases, there is no point in using transactions in a web request scenario.
Update: adding the aspect, service, and DAO as well to give the complete picture. The code uses Spring Boot.
The DAO class. Note that I am not using any transactions here because, per the code of com.googlecode.objectify.impl.TransactorNo.transactOnce(ObjectifyImpl<O>, Work<R>), a transactional ObjectifyImpl is flushed and committed in that method, which I don't want. I want the commit to happen once and everything else to join in on that transaction. Basically that is the problematic code in com.googlecode.objectify.impl.TransactorNo; I will try to explain my understanding later in the question.
@Component
public class DummyDaoImpl implements DummyDao {

    @Override
    public List<UserEntity> loadAll() {
        Query<UserEntity> query = ObjectifyService.ofy().transactionless().load().type(UserEntity.class);
        return query.list();
    }

    @Override
    public List<UserEntity> findByUserId(Long userId) {
        Query<UserEntity> query = ObjectifyService.ofy().transactionless().load().type(UserEntity.class);
        //query = query.filterKey(Key.create(UserEntity.class, userId));
        return query.list();
    }

    @Override
    public List<UserEntity> findByUsername(String username) {
        return ObjectifyService.ofy().transactionless().load().type(UserEntity.class).filter("username", username).list();
    }

    @Override
    public void update(UserEntity userEntity) {
        ObjectifyService.ofy().save().entity(userEntity);
    }

    @Override
    public void update(Iterable<UserEntity> userEntities) {
        ObjectifyService.ofy().save().entities(userEntities);
    }

    @Override
    public void delete(Long userId) {
        ObjectifyService.ofy().delete().key(Key.create(UserEntity.class, userId));
    }
}
Below is the service class:
@Service
public class DummyServiceImpl implements DummyService {

    private static final Logger LOGGER = LoggerFactory.getLogger(DummyServiceImpl.class);

    @Autowired
    private DummyDao dummyDao;

    public void saveDummydata() {
        List<UserEntity> users = new ArrayList<UserEntity>();
        for (int i = 1; i <= 10; i++) {
            UserEntity user = new UserEntity();
            user.setAge(i);
            user.setUsername("username_" + i);
            users.add(user);
        }
        this.dummyDao.update(users);
    }

    /* (non-Javadoc)
     * @see com.bbb.core.objectify.test.services.DummyService#mutateDataWithException(java.lang.String, java.lang.Long)
     */
    @Override
    @ObjectifyTransactional
    public void mutateDataWithException(String usernameToMutate, Long userIdToDelete) throws Exception {
        // update one
        LOGGER.info("Attempting to update UserEntity with username={}", "username_1");
        List<UserEntity> mutatedUsersList = new ArrayList<UserEntity>();
        List<UserEntity> users = dummyDao.findByUsername(usernameToMutate);
        for (UserEntity userEntity : users) {
            userEntity.setUsername(userEntity.getUsername() + "_updated");
            mutatedUsersList.add(userEntity);
        }
        dummyDao.update(mutatedUsersList);

        // delete another
        UserEntity user = dummyDao.findByUserId(userIdToDelete).get(0);
        LOGGER.info("Attempting to delete UserEntity with userId={}", user.getUserId());
        dummyDao.delete(user.getUserId());

        throw new RuntimeException("Dummy Exception");
    }

    /* (non-Javadoc)
     * @see com.bbb.core.objectify.test.services.DummyService#findAllUsers()
     */
    @Override
    public List<UserEntity> findAllUsers() {
        return dummyDao.loadAll();
    }
}
The aspect, which wraps methods annotated with @ObjectifyTransactional in a transact Work:
@Aspect
@Component
public class ObjectifyTransactionAspect {

    private static final Logger LOGGER = LoggerFactory.getLogger(ObjectifyTransactionAspect.class);

    @Around(value = "execution(* *(..)) && @annotation(objectifyTransactional)")
    public Object objectifyTransactAdvise(final ProceedingJoinPoint pjp, ObjectifyTransactional objectifyTransactional) throws Throwable {
        try {
            Object result = null;
            Work<Object> work = new Work<Object>() {
                @Override
                public Object run() {
                    try {
                        return pjp.proceed();
                    } catch (Throwable throwable) {
                        throw new ObjectifyTransactionExceptionWrapper(throwable);
                    }
                }
            };
            switch (objectifyTransactional.propagation()) {
                case REQUIRES_NEW:
                    int limitTries = objectifyTransactional.limitTries();
                    if (limitTries <= 0) {
                        Exception illegalStateException = new IllegalStateException("limitTries must be more than 0.");
                        throw new ObjectifyTransactionExceptionWrapper(illegalStateException);
                    } else {
                        if (limitTries == Integer.MAX_VALUE) {
                            result = ObjectifyService.ofy().transactNew(work);
                        } else {
                            result = ObjectifyService.ofy().transactNew(limitTries, work);
                        }
                    }
                    break;
                case NOT_SUPPORTED:
                case NEVER:
                case MANDATORY:
                    result = ObjectifyService.ofy().execute(objectifyTransactional.propagation(), work);
                    break;
                case REQUIRED:
                case SUPPORTS:
                    result = ObjectifyService.ofy().transact(work);
                    break;
                default:
                    break;
            }
            return result;
        } catch (ObjectifyTransactionExceptionWrapper e) {
            String packageName = pjp.getSignature().getDeclaringTypeName();
            String methodName = pjp.getSignature().getName();
            LOGGER.error("An exception occurred while executing [{}.{}] in a transactional context.",
                    packageName, methodName, e);
            throw e.getCause();
        } catch (Throwable ex) {
            String packageName = pjp.getSignature().getDeclaringTypeName();
            String methodName = pjp.getSignature().getName();
            String fullyQualifiedMethodName = packageName + "." + methodName;
            throw new RuntimeException("Unexpected exception while executing ["
                    + fullyQualifiedMethodName + "] in a transactional context.", ex);
        }
    }
}
Now, the problematic code as I see it is the following, in com.googlecode.objectify.impl.TransactorNo:
@Override
public <R> R transact(ObjectifyImpl<O> parent, Work<R> work) {
    return this.transactNew(parent, Integer.MAX_VALUE, work);
}

@Override
public <R> R transactNew(ObjectifyImpl<O> parent, int limitTries, Work<R> work) {
    Preconditions.checkArgument(limitTries >= 1);
    while (true) {
        try {
            return transactOnce(parent, work);
        } catch (ConcurrentModificationException ex) {
            if (--limitTries > 0) {
                if (log.isLoggable(Level.WARNING))
                    log.warning("Optimistic concurrency failure for " + work + " (retrying): " + ex);
                if (log.isLoggable(Level.FINEST))
                    log.log(Level.FINEST, "Details of optimistic concurrency failure", ex);
            } else {
                throw ex;
            }
        }
    }
}

private <R> R transactOnce(ObjectifyImpl<O> parent, Work<R> work) {
    ObjectifyImpl<O> txnOfy = startTransaction(parent);
    ObjectifyService.push(txnOfy);
    boolean committedSuccessfully = false;
    try {
        R result = work.run();
        txnOfy.flush();
        txnOfy.getTransaction().commit();
        committedSuccessfully = true;
        return result;
    } finally {
        if (txnOfy.getTransaction().isActive()) {
            try {
                txnOfy.getTransaction().rollback();
            } catch (RuntimeException ex) {
                log.log(Level.SEVERE, "Rollback failed, suppressing error", ex);
            }
        }
        ObjectifyService.pop();
        if (committedSuccessfully) {
            txnOfy.getTransaction().runCommitListeners();
        }
    }
}
transactOnce is, by design, always working with a single transaction: it will either commit or roll back that transaction. There is no provision to chain transactions the way a normal enterprise app would want, i.e. a service calls multiple DAO methods in a single transaction and commits or rolls back depending on how things look.
Keeping this in mind, I removed all annotations and transact calls from my DAO methods so that they don't start an explicit transaction; the aspect on the service wraps the service method in transact and ultimately in transactOnce. So the service method is running in a transaction and no new transaction is fired again. This is a very basic scenario; in actual production apps, services can call other service methods that also carry the annotation, so we could still end up with chained transactions, but that is a different problem to solve.
I know NoSQL stores don't support write consistency at the table or inter-table level, so am I asking too much from Google Cloud Datastore?

Choose Class in BIRT is empty even though I have added the jar to the data source

While creating the dataset, the Choose Class window is empty, even though I have added the jar to the data source. I am using Luna Service Release 2 (4.4.2).
From: http://yaragalla.blogspot.com/2013/10/using-pojo-datasource-in-birt-43.html
In the dataset class the three methods, “public void open(Object obj, Map map)”, “public Object next()” and “public void close()” must be implemented.
Make sure you have implemented these.
Here is a sample that I tested with:
public class UserDataSet {

    public Iterator<User> itr;

    public List<User> getUsers() throws ParseException {
        List<User> users = new ArrayList<>();
        // Add to Users
        ....
        return users;
    }

    public void open(Object obj, Map<String, Object> map) {
        try {
            itr = getUsers().iterator();
        } catch (ParseException e) {
            e.printStackTrace();
        }
    }

    public Object next() {
        if (itr.hasNext())
            return itr.next();
        return null;
    }

    public void close() {
    }
}
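The sample references a User bean that is not shown. A minimal, hypothetical version such as the following is enough (use whatever fields your report actually needs), since the POJO dataset's column mappings point at public getters like these:
public class User {

    private String name;
    private int age;

    public User(String name, int age) {
        this.name = name;
        this.age = age;
    }

    // BIRT maps dataset columns to public getters like these.
    public String getName() {
        return name;
    }

    public int getAge() {
        return age;
    }
}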

How to delete via DbSet in Entity Framework

Hi, I'm trying to write a generic repository for the delete operation. This is my repository:
public class Repository<T> : IRepository<T> where T : class, IAggregateRoot
{
    private readonly DbSet<T> _entitySet;
    private readonly StatosContext _statosContext;

    public Repository(StatosContext statosContext)
    {
        _statosContext = statosContext;
        _entitySet = statosContext.Set<T>();
    }

    public void Add(T entity)
    {
        _entitySet.Add(entity);
    }

    public void Delete(T entity)
    {
        _entitySet.Remove(entity);
    }
}
When I call Delete via a service method like this:
public void RemoveContact(ContactViewModel contactViewModel)
{
    var categoryView = new ContactViewModel { ContactId = contactViewModel.ContactId };
    var contact = categoryView.ConvertToContactModel();
    _contactRepository.Delete(contact);
    _contactRepository.SaveChanges();
}
it doesn't work, because it doesn't find the entity.
How can I write the Delete method in my generic repository?
The issue is that your entity isn't attached yet.
Here's my generic repository; take a look at how I do this:
public void RemoveOnSave(T entity)
{
    try
    {
        var e = m_Context.Entry(entity);
        if (e.State == EntityState.Detached)
        {
            m_Context.Set<T>().Attach(entity);
            e = m_Context.Entry(entity);
        }
        e.State = EntityState.Deleted;
    }
    catch (InvalidOperationException ex)
    {
        throw new RepositoryTrackingException(
            "An attempt was made to delete an entity you are already modifying, this may happen if you are trying to update using the same repository instance in two place", ex);
    }
}
https://github.com/lukemcgregor/StaticVoid.Repository/blob/master/StaticVoid.Repository.EntityFramework/DbContextRepositoryDataSource.cs
If you're working with disconnected entities and you're sure the entity is not tracked by the context (you should be), you can write this simple code:
public void Delete(T entity)
{
    try
    {
        _entitySet.Attach(entity);
        _entitySet.Remove(entity);
        _statosContext.SaveChanges();
    }
    catch (OptimisticConcurrencyException e)
    {
        _statosContext.Refresh(RefreshMode.ClientWins, entity);
    }
}
RefreshMode has two possible values: ClientWins and StoreWins. Which one to choose depends on your strategy; here I assume you're implementing a "last record wins" strategy.
