Spring Boot multi-tenant app with scheduled tasks - spring

I have a multi-tenant app where our customers submit their orders (JSON payloads), which have to be processed offline. We're using a database-per-tenant strategy and have the configuration working fine. However, we're stuck on asynchronous processing like in this example. When a payload is submitted by a customer, it's saved into a table. We want to run a scheduled task that can read this table and process the orders in it.
We tried something like:
@Scheduled(fixedRate = 60000)
public void doSomething() {
    TenantDataSource tenantDataSource = context.getBean(TenantDataSource.class);
    Set<String> tenants = tenantDataSource.getAll().keySet();
    tenants.forEach(tenant -> {
        MDC.setContext(new CallContext().setTenantId(tenant));
        context.getBean(JobService.class).listJobs();
    });
}
But this still looks for the jobs in the master database and not in the tenant specific database.
Any pointers?

Make sure the doSomething() method isn't transactional: if it is, the transaction interceptor binds a connection before the loop sets the tenant, so your resolver is never consulted for each tenant.
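To illustrate why the tenant has to be set on the current thread before any repository call, here is a minimal, library-free sketch of a ThreadLocal-based resolver like the one a routing DataSource typically consults (all names and URLs here are invented for illustration, not the actual Spring classes):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

// Sketch of a ThreadLocal-based tenant resolver, similar in spirit to what
// an AbstractRoutingDataSource consults via determineCurrentLookupKey().
public class TenantRoutingSketch {
    // Holds the tenant for the current thread.
    static final ThreadLocal<String> CURRENT_TENANT = new ThreadLocal<>();

    // Stands in for the per-tenant DataSource map.
    static final Map<String, String> TENANT_DBS = Map.of(
            "acme", "jdbc:postgresql://db/acme",
            "globex", "jdbc:postgresql://db/globex");

    // The "resolver": falls back to the master DB when no tenant is set,
    // which is exactly the symptom described in the question.
    static String resolveDataSource() {
        String tenant = CURRENT_TENANT.get();
        return tenant == null ? "jdbc:postgresql://db/master" : TENANT_DBS.get(tenant);
    }

    // The scheduled task must set the tenant before each iteration and
    // clear it afterwards so state never leaks between tenants.
    static List<String> runForAllTenants() {
        List<String> used = new ArrayList<>();
        for (String tenant : TENANT_DBS.keySet()) {
            CURRENT_TENANT.set(tenant);
            try {
                used.add(resolveDataSource());   // resolves tenant-specific DB
            } finally {
                CURRENT_TENANT.remove();         // always clean up the thread
            }
        }
        return used;
    }
}
```

The try/finally cleanup matters with scheduler thread pools: a leftover tenant on a pooled thread would silently route the next task to the wrong database.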

Related

What are the real-world uses of the @Transactional annotation in Spring Boot?

I do not have a clear concept of why we use the @Transactional annotation. I need a real-world scenario for using it.
Please explain the concept clearly with an example.
A transaction is a series of operations which either all occur or none occur. In a database system you can therefore start a transaction, execute a series of queries, and then either commit the transaction if all the queries were successful or roll back if you had an error. During the transaction all the changes are only visible to your session, and in case of a rollback all the changes you have made will be undone.
As a real-world example, take a money transfer from one account to another. There is one query to withdraw from an account and another query to credit the money to the other account. If you executed those queries without a transaction and the second query failed for whatever reason, the money would have been withdrawn from one account but not credited to the other, which means money would simply have vanished.
Spring provides an abstraction layer for this with the @Transactional annotation, which starts a transaction before your method is called and commits it if all went well, or rolls it back when an exception was thrown.
public class MoneyTransferService {

    @Autowired
    private JdbcTemplate jdbcTemplate;

    @Transactional
    public void transferMoney(Double amount, Integer debitAccount, Integer creditAccount) {
        jdbcTemplate.update("UPDATE accounts SET balance = balance - ? WHERE accountId = ?", amount, debitAccount);
        jdbcTemplate.update("UPDATE accounts SET balance = balance + ? WHERE accountId = ?", amount, creditAccount);
    }
}
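To make the all-or-nothing behaviour concrete without a real database, here is a small in-memory simulation of what commit/rollback guarantees for the transfer above (the class and account numbers are invented for illustration):

```java
import java.util.HashMap;
import java.util.Map;

// In-memory illustration of the all-or-nothing semantics @Transactional
// provides: on failure, every change made inside the unit is undone.
public class TransferDemo {
    static final Map<Integer, Double> ACCOUNTS = new HashMap<>();

    static void transfer(double amount, int debit, int credit) {
        // The snapshot plays the role of the state a rollback restores.
        Map<Integer, Double> snapshot = new HashMap<>(ACCOUNTS);
        try {
            ACCOUNTS.merge(debit, -amount, Double::sum);   // withdraw
            if (!ACCOUNTS.containsKey(credit)) {
                throw new IllegalArgumentException("unknown account " + credit);
            }
            ACCOUNTS.merge(credit, amount, Double::sum);   // credit
        } catch (RuntimeException e) {
            ACCOUNTS.clear();
            ACCOUNTS.putAll(snapshot);   // "rollback": no money vanishes
            throw e;
        }
    }
}
```

If the credit step fails, the withdrawal is undone along with it, which is exactly the failure the text's money-transfer example describes.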

Should I use ALLOW FILTERING in Cassandra to delete associated entities in a multi-tenant app?

I have a spring-boot project and I am using Cassandra as the database. My application is tenant-based and all my tables include the tenantId. It is always part of the partition key of every table, but I also have other columns that are part of the partition keys.
So the problem is: I want to remove a specific tenant from my database, but I can't do it directly, because I need the other parts of the partition key.
I have two solutions in mind.
1. Allow filtering, select all the tenant-specific entities, and then remove them one by one in the application.
2. Use the findAll() method to fetch all the data, then filter in the application and delete all the tenant-specific data.
Example:
public class DeleteTenant {

    @Autowired
    MyRepository myRepo;

    public void cleanTenantWithoutDbFiltering(String tenantId) {
        myRepo.findAll()
              .stream()
              .filter(entity -> entity.getTenantId().equals(tenantId)) // ??
              .forEach(myRepo::delete);
    }

    public void cleanTenantWithDbFiltering(String tenantId) {
        myRepo.getTenantSpecificData(tenantId)
              .forEach(myRepo::delete);
    }
}
My getTenantSpecificData(String tenantId) query would look like this:
@AllowFiltering
@Query("SELECT * FROM myTable WHERE tenantId = ?1 ALLOW FILTERING")
public List<MyEntity> getTenantSpecificData(String tenantId);
Do you have any other ideas about it? If not, which one do you think would be more efficient: filtering in the application itself or in Cassandra?
Thanks in advance for your answers!
It isn't clear to me how you've modelled your data because you haven't provided your schema, but in any case using ALLOW FILTERING is never a good idea: it forces a full table scan of every relevant table unless the tenant ID alone is the partition key.
You will need to come up with a different approach, such as writing a Spark app that efficiently goes through the tables to identify the partitions/rows to delete. Cheers!
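Another common alternative, sketched below without any Cassandra dependency, is to maintain an auxiliary lookup table partitioned by tenantId alone that records the remaining partition-key parts written for that tenant; deleting a tenant then becomes a keyed read plus keyed deletes, with no scan. The table shape and the "region" column are invented here for illustration:

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

// Sketch of avoiding ALLOW FILTERING: alongside the data table, keep a
// lookup table keyed by tenantId alone that records the other
// partition-key parts ever written for that tenant.
public class TenantCleanupSketch {
    // Data table, keyed by (tenantId, region); "region" stands in for the
    // extra partition-key column mentioned in the question.
    static final Map<List<String>, String> DATA = new HashMap<>();
    // Lookup table: tenantId -> set of region keys used by that tenant.
    static final Map<String, Set<String>> TENANT_KEYS = new HashMap<>();

    static void insert(String tenantId, String region, String payload) {
        DATA.put(List.of(tenantId, region), payload);
        TENANT_KEYS.computeIfAbsent(tenantId, t -> new HashSet<>()).add(region);
    }

    static void deleteTenant(String tenantId) {
        for (String region : TENANT_KEYS.getOrDefault(tenantId, Set.of())) {
            DATA.remove(List.of(tenantId, region));  // full partition key known
        }
        TENANT_KEYS.remove(tenantId);
    }
}
```

The trade-off is an extra write per insert in exchange for tenant deletion that touches only that tenant's partitions.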

How to assign quartz job to user and category?

I'm implementing a web app using Spring Boot and Quartz to allow registered users to schedule notifications. These notifications are divided into two categories: SMS and email. I would like to make it possible to choose the SMS or email category and then list all scheduled notifications by category. A user can edit already scheduled notifications, add new ones and remove chosen ones. The task seems very simple, but I don't know how to assign a job to a user and a category, since when I create a new job it is only possible to identify jobs by job id and group name. See the following code snippet:
private JobDetail createJob(ScheduleEmailDto scheduleEmailDto) {
    JobDataMap jobDataMap = new JobDataMap();
    jobDataMap.put("email", scheduleEmailDto.getEmail());
    jobDataMap.put("subject", scheduleEmailDto.getSubject());
    jobDataMap.put("body", scheduleEmailDto.getBody());
    Integer userId = scheduleEmailDto.getUserId();
    Integer categoryId = scheduleEmailDto.getCategoryId();
    JobDetail newJob = JobBuilder.newJob(EmailJob.class)
        .withIdentity(UUID.randomUUID().toString(), "group")
        .usingJobData(jobDataMap)
        .storeDurably()
        .build();
    return newJob;
}
Is anyone able to point out how I can assign a newly created job to a user and a category? All suggestions would be very helpful.
If you don't already, you can configure a JDBCJobStore, which allows you to save job details, triggers and so on in a few database tables and manage them from the DB.
See http://www.quartz-scheduler.org/documentation/quartz-2.3.0/tutorials/tutorial-lesson-09.html for more details.
At that point it would be easy for you to add a table that links the user and category to the jobs and manage them via the DB.
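A lighter-weight option, since Quartz identifies jobs by (name, group), is to encode the user and category into the group name and later list jobs with a prefix matcher such as Quartz's GroupMatcher.groupStartsWith. The naming scheme below is an invented convention, shown without a Quartz dependency so only the encoding and filtering logic is demonstrated:

```java
import java.util.Collection;
import java.util.List;
import java.util.stream.Collectors;

// Sketch: encode user and category into the Quartz group name so jobs can
// later be listed per user or per (user, category) via a prefix match.
public class JobGroupSketch {
    // e.g. userId 7, categoryId 2 -> "user-7.cat-2"
    static String groupFor(int userId, int categoryId) {
        return "user-" + userId + ".cat-" + categoryId;
    }

    // Stands in for scheduler.getJobKeys(GroupMatcher.groupStartsWith(prefix))
    static List<String> groupsForUser(Collection<String> allGroups, int userId) {
        String prefix = "user-" + userId + ".";
        return allGroups.stream()
                        .filter(g -> g.startsWith(prefix))
                        .collect(Collectors.toList());
    }
}
```

This avoids extra tables entirely, at the cost of parsing ids back out of the group string; the JDBCJobStore link table is the cleaner choice when you need richer queries.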

Grouping a Kafka message send and DB updates in one transaction in Spring Boot

I need to perform several operations in one transaction
produce kafka message
update Table A
update Table B
I'm fine with sending the message and updating neither table (A nor B). I'm not OK with producing the message and updating only one of the tables.
I'm trying to achieve my goal using the @Transactional annotation:
import org.springframework.transaction.annotation.Isolation;
import org.springframework.transaction.annotation.Propagation;
import org.springframework.transaction.annotation.Transactional;

@Transactional(propagation = Propagation.REQUIRED, isolation = Isolation.SERIALIZABLE)
public void handle(Event approvalEvent) {
    var entity = entityService.getLatestVersion(approvalEvent.getTransactionId());
    entityService.approve(entity.getTransactionId());
    logService.logApproval(entity);
    producer.send(approvalEvent);
}
Am I doing this right?
The problem with the approach above is that you are interacting with two distinct systems (a database and a message queue) in one transaction. The combinations of scenarios to handle when an operation succeeds on one system but fails on the other make the solution complex.
There is a pattern in the microservices world to handle exactly this scenario. It is called the outbox pattern.
You can read more about it here.
The short summary is that you have an additional table in your database, called the outbox, which contains the messages to be published to the message queue.
In the DB transaction for adding/updating the entity, you insert a row into the outbox table containing the details of the operation on the entity.
Then you asynchronously read rows from the outbox table and publish them to the message queue, either via polling or using change data capture. See a sample implementation using Debezium here.
Your transaction code would look like this.
@Transactional(propagation = Propagation.REQUIRED, isolation = Isolation.SERIALIZABLE)
public void handle(Event approvalEvent) {
    var entity = entityService.getLatestVersion(approvalEvent.getTransactionId());
    entityService.approve(entity.getTransactionId());
    logService.logApproval(entity);
    // Outbox is the table containing the records to be published to the MQ
    outboxRepo.save(approvalEvent);
}
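The two halves of the pattern can be simulated end to end without any infrastructure. The sketch below (all names invented) shows the atomic write of entity plus outbox row, and a separate relay that later drains the outbox to the broker:

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

// In-memory sketch of the outbox pattern: the business write and the
// outbox insert share one "transaction"; a separate relay later moves
// outbox rows to the broker.
public class OutboxSketch {
    static final List<String> ENTITIES = new ArrayList<>();
    static final Deque<String> OUTBOX = new ArrayDeque<>();
    static final List<String> BROKER = new ArrayList<>();   // stands in for Kafka

    // One atomic unit: either both rows exist or neither does.
    static void handle(String event) {
        ENTITIES.add("approved:" + event);
        OUTBOX.add(event);                 // same transaction as the entity
    }

    // Polled relay: publish and remove. If publishing failed, the row
    // would stay in the outbox and be retried -- at-least-once delivery.
    static void relay() {
        while (!OUTBOX.isEmpty()) {
            BROKER.add(OUTBOX.poll());
        }
    }
}
```

Note the resulting delivery guarantee is at-least-once, not exactly-once: if the relay crashes between publish and delete, the message is sent again, so consumers should be idempotent.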

Spring Boot @Transactional method running on multiple threads

In my Spring Boot application, multiple threads run the following @Transactional method in parallel.
@Transactional
public void run(Customer customer) {
    Customer existing = this.clientCustomerService.findByCustomerName(customer.getName());
    if (existing == null) {
        this.clientCustomerService.save(customer);
    }
    // other database operations
}
When this runs on multiple threads at the same time, since the customer object will not be saved until the end of the transaction block, is there any possibility of duplicate customers in the database?
If your Customer entity has an @Id field, which defines a primary key column in the customer table, the database will throw an exception like javax.persistence.EntityExistsException. Even if you run your code on multiple threads, at any point in time only one of them will acquire the lock on the newly inserted row at the database level. You must also define an @Version column/field at the entity level in order to use optimistic locking. More details about this can be found here.
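To see why duplicates are possible when the customer name (as opposed to the primary key) carries no unique constraint, here is a deterministic replay of the check-then-insert race in run(), with the two interleaved "threads" simulated sequentially for clarity:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Deterministic replay of the check-then-act race: both transactions read
// "no such customer" before either commits, so without a database-level
// unique constraint on the name, both inserts succeed.
public class DuplicateRaceSketch {
    static final List<String> NO_CONSTRAINT = new ArrayList<>();
    static final Set<String> UNIQUE_NAME = new HashSet<>();

    // Worst-case interleaving without a unique constraint on the name.
    static int insertedWithoutConstraint(String name) {
        boolean t1Sees = NO_CONSTRAINT.contains(name);   // thread 1 checks
        boolean t2Sees = NO_CONSTRAINT.contains(name);   // thread 2 checks
        if (!t1Sees) NO_CONSTRAINT.add(name);            // thread 1 saves
        if (!t2Sees) NO_CONSTRAINT.add(name);            // thread 2 saves too
        return Collections.frequency(NO_CONSTRAINT, name);
    }

    // With a unique constraint, the database rejects the second insert.
    static boolean secondInsertRejected(String name) {
        boolean first = UNIQUE_NAME.add(name);    // succeeds
        boolean second = UNIQUE_NAME.add(name);   // rejected by the constraint
        return first && !second;
    }
}
```

The practical takeaway: rely on a unique constraint on the customer name (and handle the resulting constraint-violation exception), not on the find-then-save check alone.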
