How to specify Job executor to only use defined process engines? - spring-boot

I am setting up a Spring Boot application with Camunda. I want to use a multi-tenant setup as described in https://docs.camunda.org/manual/7.5/user-guide/process-engine/multi-tenancy/#one-process-engine-per-tenant
I have managed to set up multiple process engines via Java (so not with processes.xml, but in code), but there always seems to be a default process engine. How do I achieve a setup with only the process engines I defined?
Extra information:
each process engine uses its own datasource, derived via context
I want to avoid the default process engine because it needs its own datasource. I don't want to set up a datasource/database for a process engine without a tenant. (If I don't set up a datasource for the default engine, the job executor throws errors because it cannot get a connection.)
The setup I've tried is in the following block, but for some reason there is always a "default" process engine.
@Autowired
private ConfigurableListableBeanFactory beanFactory;

@Bean
@Order(4)
public void multipleCamunda() {
    log.info("Starting Camunda Multitenant");
    // targetDatasources maps each tenant id to its own DataSource
    this.targetDatasources.entrySet().stream().forEach(entry -> {
        String tenant = (String) entry.getKey();
        DataSource tenantDatasource = (DataSource) entry.getValue();
        SpringProcessEngineConfiguration standaloneProcessEngineConfiguration = new SpringProcessEngineConfiguration();
        standaloneProcessEngineConfiguration.setDataSource(tenantDatasource);
        standaloneProcessEngineConfiguration.setDatabaseSchemaUpdate("true");
        standaloneProcessEngineConfiguration.setProcessEngineName(tenant);
        DataSourceTransactionManager dataSourceTransactionManager = new DataSourceTransactionManager(tenantDatasource);
        standaloneProcessEngineConfiguration.setTransactionManager(dataSourceTransactionManager);
        standaloneProcessEngineConfiguration.setHistory(HistoryLevel.HISTORY_LEVEL_FULL.getName());
        standaloneProcessEngineConfiguration.setJobExecutorDeploymentAware(true);
        // deploy all processes from the classpath folder 'bpm'
        Resource[] resources = new Resource[0];
        try {
            resources = resourceLoader.getResources("classpath:/bpm/*.bpmn");
        } catch (IOException e) {
            e.printStackTrace();
        }
        standaloneProcessEngineConfiguration.setDeploymentResources(resources);
        ProcessEngine processEngine = standaloneProcessEngineConfiguration.buildProcessEngine();
        RuntimeContainerDelegate.INSTANCE.get().registerProcessEngine(processEngine);
        beanFactory.registerSingleton("processEngine" + tenant, processEngine);
        log.info("Started process engine for " + tenant);
    });
}
The Maven dependencies I use:
<dependency>
    <groupId>org.camunda.bpm</groupId>
    <artifactId>camunda-engine-cdi</artifactId>
    <version>7.10.0</version>
</dependency>
<dependency>
    <groupId>org.camunda.bpm.javaee</groupId>
    <artifactId>camunda-ejb-client</artifactId>
    <version>7.10.0</version>
</dependency>
<dependency>
    <groupId>org.camunda.bpm.springboot</groupId>
    <artifactId>camunda-bpm-spring-boot-starter-webapp</artifactId>
    <version>3.2.1</version>
</dependency>
I assume that I need to define a @Bean for some kind of configuration bean, but I can't figure out which one or how. Please tell me which configuration bean I need to define and how.
**Solution**
In order to stop the default initialization, you need to edit the application.yaml and add:
camunda:
  bpm:
    enabled: false
Setting the same property via CamundaBpmProperties somehow doesn't seem to work.
Once you have done this, the default startup won't occur, and the process engines will start when registered via the code snippet posted above.
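If you then need to get hold of a tenant's engine at runtime, here is a minimal sketch (assuming the bean names from the snippet above; "tenant1" is a hypothetical tenant id):
// Via Camunda's engine registry (the name set with setProcessEngineName):
ProcessEngine engine = ProcessEngines.getProcessEngine("tenant1");
// Or via the Spring bean registered in the snippet above:
ProcessEngine sameEngine = (ProcessEngine) beanFactory.getBean("processEngine" + "tenant1");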

Related

Include newly added data sources into route Data Source object without restarting the application server

I implemented Spring's AbstractRoutingDataSource, dynamically determining the actual DataSource based on the current context.
I referred to this article: https://www.baeldung.com/spring-abstract-routing-data-source
On Spring Boot application startup, I create a map of contexts to DataSource objects to configure our AbstractRoutingDataSource. All these client context details are fetched from a database table.
@Bean
@DependsOn("dataSource")
@Primary
public DataSource routeDataSource() {
    RoutingDataSource routeDataSource = new RoutingDataSource();
    DataSource defaultDataSource = (DataSource) applicationContext.getBean("dataSource");
    List<EstCredentials> credentials = LocalDataSourcesDetailsLoader.getAllCredentails(defaultDataSource); // fetched from a database table
    localDataSourceRegistrationBean.registerDataSourceBeans(credentials);
    routeDataSource.setDefaultTargetDataSource(defaultDataSource);
    Map<Object, Object> targetDataSources = new HashMap<>();
    for (EstCredentials credential : credentials) {
        targetDataSources.put(credential.getEstCode().toString(),
                (DataSource) applicationContext.getBean(credential.getEstCode().toString()));
    }
    routeDataSource.setTargetDataSources(targetDataSources);
    return routeDataSource;
}
The problem is that if I add new client details, they do not end up in routeDataSource. The obvious reason is that these values are set at startup.
How can I add a new client context and reinitialize the routeDataSource object?
I am planning to write a service that fetches all newly added client contexts and resets the routeDataSource object, so there is no need to restart the server each time the client details change.
A simple solution to this situation is adding @RefreshScope to the bean definition:
@Bean
@Primary
@RefreshScope
public DataSource routeDataSource() {
    RoutingDataSource routeDataSource = new RoutingDataSource();
    DataSource defaultDataSource = (DataSource) applicationContext.getBean("dataSource");
    List<EstCredentials> credentials = LocalDataSourcesDetailsLoader.getAllCredentails(defaultDataSource); // fetched from a database table
    localDataSourceRegistrationBean.registerDataSourceBeans(credentials);
    routeDataSource.setDefaultTargetDataSource(defaultDataSource);
    Map<Object, Object> targetDataSources = new HashMap<>();
    for (EstCredentials credential : credentials) {
        targetDataSources.put(credential.getEstCode().toString(),
                (DataSource) applicationContext.getBean(credential.getEstCode().toString()));
    }
    routeDataSource.setTargetDataSources(targetDataSources);
    return routeDataSource;
}
Add Spring Boot Actuator as a dependency:
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-actuator</artifactId>
</dependency>
Then trigger a refresh by sending a POST to /actuator/refresh to update the DataSource (in fact, every refresh-scoped bean).
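Note that by default only the health and info endpoints are exposed over HTTP, so you may need to expose the refresh endpoint explicitly; a minimal sketch, assuming Spring Cloud (which provides @RefreshScope and the refresh endpoint) is on the classpath:
management:
  endpoints:
    web:
      exposure:
        include: refresh
After that, a POST to http://localhost:8080/actuator/refresh (port is illustrative) rebuilds every @RefreshScope bean.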
So this will depend on how much you know about the datasources to be added, but you could set this up as a multi-tenant project. Another example of creating new datasources:
@Autowired
private Map<String, DataSource> mars2DataSources;

public void addDataSourceAtRuntime() {
    DataSource newDataSource = DataSourceBuilder
            .create(MultiTenantJPAConfiguration.class.getClassLoader())
            .driverClassName("org.postgresql.Driver")
            .username("postgres")
            .password("postgres")
            .url("jdbc:postgresql://localhost:5412/somedb")
            .build();
    mars2DataSources.put("tenantX", newDataSource);
}
Given that you are using Oracle, you could also use its database change notification feature.
Think of it as a listener in the JDBC driver that gets notified whenever something changes in your database table. Upon receiving a change, you could reinitialize/add datasources.
You can find a tutorial on how to do this here: https://docs.oracle.com/cd/E11882_01/java.112/e16548/dbchgnf.htm#JJDBC28820
Note that, depending on your organization, database notifications may need some extra firewall settings for the communication to work.
Advantage: you do not need to manually call the REST endpoint when something changes (though Marcos Barberios' answer is perfectly valid!)
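For illustration, a minimal sketch of such a registration, assuming the Oracle JDBC driver (ojdbc) is on the classpath, the connecting user has the CHANGE NOTIFICATION privilege, and the table/column names are placeholders:
import java.sql.ResultSet;
import java.sql.Statement;
import java.util.Properties;
import oracle.jdbc.OracleConnection;
import oracle.jdbc.OracleStatement;
import oracle.jdbc.dcn.DatabaseChangeEvent;
import oracle.jdbc.dcn.DatabaseChangeRegistration;

public class CredentialsTableWatcher {

    public DatabaseChangeRegistration watch(OracleConnection conn) throws Exception {
        Properties props = new Properties();
        props.setProperty(OracleConnection.DCN_NOTIFY_ROWIDS, "true");
        DatabaseChangeRegistration dcr = conn.registerDatabaseChangeNotification(props);

        // Reinitialize/add datasources whenever the watched table changes.
        dcr.addListener((DatabaseChangeEvent event) -> System.out.println("Change detected: " + event));

        // Associate the registration with the query whose result set should be watched.
        try (Statement stmt = conn.createStatement()) {
            ((OracleStatement) stmt).setDatabaseChangeRegistration(dcr);
            try (ResultSet rs = stmt.executeQuery("SELECT est_code FROM est_credentials")) {
                while (rs.next()) { /* consume the result set to complete the registration */ }
            }
        }
        return dcr;
    }
}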

Spring sleuth Baggage key not getting propagated

I have a filter (OncePerRequestFilter) which basically intercepts incoming requests and logs the traceId, spanId, etc., and this works well.
This filter lives in a common module which is included in other projects, to avoid including the Spring Sleuth dependency in each of my microservices; I created it as a library because any change to the library is then common to all modules.
Now I have to add a new propagation key which needs to be propagated to all services via HTTP headers, like the traceId and spanId. For that I've extracted the current span from HttpTracing and added a baggage key to it (as shown below):
Span span = httpTracing.tracing().tracer().currentSpan();
String corelationId = StringUtils.isEmpty(request.getHeader(CORELATION_ID))
        ? "n/a"
        : request.getHeader(CORELATION_ID);
ExtraFieldPropagation.set(CUSTOM_TRACE_ID_MDC_KEY_NAME, corelationId);
span.annotate("baggage_set");
span.tag(CUSTOM_TRACE_ID_MDC_KEY_NAME, corelationId);
I've added propagation-keys and whitelisted-mdc-keys to the application.yml of my library, like below:
spring:
  sleuth:
    propagation-keys:
      - x-corelationId
    log:
      slf4j:
        whitelisted-mdc-keys:
          - x-corelationId
After making this change in the filter, the corelationId is not available when I make an HTTP call to another service within the same app; basically the keys are not getting propagated.
In your library you can implement an ApplicationEnvironmentPreparedEvent listener and add the configuration you need there.
Example:
@Component
public class CustomApplicationListener implements ApplicationListener<ApplicationEvent> {

    private static final Logger log = LoggerFactory.getLogger(CustomApplicationListener.class);

    @Override
    public void onApplicationEvent(ApplicationEvent event) {
        if (event instanceof ApplicationEnvironmentPreparedEvent) {
            log.debug("Custom ApplicationEnvironmentPreparedEvent Listener");
            ApplicationEnvironmentPreparedEvent envEvent = (ApplicationEnvironmentPreparedEvent) event;
            ConfigurableEnvironment env = envEvent.getEnvironment();
            Properties props = new Properties();
            props.put("spring.sleuth.propagation-keys", "x-corelationId");
            props.put("spring.sleuth.log.slf4j.whitelisted-mdc-keys", "x-corelationId");
            env.getPropertySources().addFirst(new PropertiesPropertySource("custom", props));
        }
    }
}
Then in your microservice you register this custom listener:
public static void main(String[] args) {
    ConfigurableApplicationContext context = new SpringApplicationBuilder(MyApplication.class)
            .listeners(new CustomApplicationListener()).run(args);
}
I've gone through the documentation and it seems like I need to add spring.sleuth.propagation-keys and whitelist them using spring.sleuth.log.slf4j.whitelisted-mdc-keys
Yes, you need to do this.
Is there another way to add these properties in the common module so that I do not need to include them in each and every microservice?
Yes, you can use Spring Cloud Config server with a properties file called application.yml / application.properties that sets those properties for all microservices.
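As a sketch of that approach: with a Config Server backed by a Git repository, a shared application.yml at the root of the repository is served to every client application, so the Sleuth keys only need to be declared once (the repository layout here is an assumption):
# application.yml in the config server's Git repository, shared by all clients
spring:
  sleuth:
    propagation-keys:
      - x-corelationId
    log:
      slf4j:
        whitelisted-mdc-keys:
          - x-corelationId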
The answer from Mahmoud works great when you want to register the whitelisted-mdc-keys programmatically.
An extra tip: when you need these properties in a test as well, you can find the answer in this post: How to register a ApplicationEnvironmentPreparedEvent in Spring Test

Dynamically generate Application.properties file in spring boot

I have come across a situation where I need to fetch a cron expression from the database and then schedule it in Spring Boot. I am fetching the data using JPA. The problem is that in Spring Boot the @Scheduled annotation does not let me use the DB value directly, as it only takes a constant value. So what I am planning to do is to dynamically generate a properties file and read the cron expression from it. But here I am facing another problem: the dynamically generated properties file is created in the target directory,
so I can't use it at program load time.
So can anyone assist me with reading the dynamically generated file from the resource folder, or with scheduling a cron expression fetched from the DB in Spring Boot?
If I place all the details of the cron expression in a properties file, I can schedule the job.
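For reference, this is what the properties-file variant looks like with a plain placeholder; a minimal sketch, assuming a cron.expression1 entry exists in the loaded properties and @EnableScheduling is active:
import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Component;

@Component
public class MorningJobScheduler {

    // cron.expression1 is resolved from the Environment at startup
    @Scheduled(cron = "${cron.expression1}")
    public void runMorningJob() {
        // job logic
    }
}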
My latest attempt dynamically generates the properties file:
@Configuration
public class CronConfiguration {

    @Autowired
    private JobRepository jobRepository;

    @Autowired
    private ResourceLoader resourceLoader;

    @PostConstruct
    protected void initialize() {
        updateConfiguration();
    }

    private void updateConfiguration() {
        Properties properties = new Properties();
        List<Job> morningJobList = new ArrayList<Job>();
        List<String> morningJobCronExp = new ArrayList<String>();
        int num = 1;
        System.out.println("started");
        morningJobList = jobRepository.findByDescriptionContaining("Morning Job");
        for (Job job : morningJobList) {
            morningJobCronExp.add(job.getCronExp());
        }
        for (String cron : morningJobCronExp) {
            properties.setProperty("cron.expression" + num, cron);
            num++;
        }
        Resource propertiesResource = resourceLoader.getResource("classpath:application1.properties");
        try (OutputStream out = new BufferedOutputStream(new FileOutputStream(propertiesResource.getFile()))) {
            properties.store(out, null);
        } catch (Exception ex) {
            // Handle error
            ex.printStackTrace();
        }
    }
}
Still, it is not able to write to the properties file under the resource folder.
Consider using the Quartz Scheduler framework. It stores scheduler info in the DB, and there is no need to implement your own DB communication; it is already provided.
Found this example: https://www.callicoder.com/spring-boot-quartz-scheduler-email-scheduling-example/
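A minimal sketch of what that looks like with Quartz, assuming the spring-boot-starter-quartz dependency; MorningJob and the cron source are placeholders:
import org.quartz.CronScheduleBuilder;
import org.quartz.Job;
import org.quartz.JobBuilder;
import org.quartz.JobDetail;
import org.quartz.JobExecutionContext;
import org.quartz.Scheduler;
import org.quartz.Trigger;
import org.quartz.TriggerBuilder;

public class QuartzCronExample {

    // Hypothetical job; Quartz instantiates it for each execution.
    public static class MorningJob implements Job {
        @Override
        public void execute(JobExecutionContext ctx) { /* job logic */ }
    }

    // cronExp would come from the database, e.g. via jobRepository
    public void schedule(Scheduler scheduler, String cronExp) throws Exception {
        JobDetail job = JobBuilder.newJob(MorningJob.class).withIdentity("morningJob").build();
        Trigger trigger = TriggerBuilder.newTrigger()
                .withIdentity("morningJobTrigger")
                .withSchedule(CronScheduleBuilder.cronSchedule(cronExp))
                .build();
        scheduler.scheduleJob(job, trigger);
    }
}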

jdbc connection pool using ThreadpoolExecutor in spring boot

I have an application that runs through multiple databases and, for each database, runs select queries on all tables and dumps the results to Hadoop.
My design is to create one datasource connection at a time and use the connection pool obtained to run select queries in multiple threads. Once done with this datasource, close the connection and create a new one.
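For context, @Async only runs on a separate thread when async support is enabled and an executor is available; a minimal sketch of that wiring (pool sizes are illustrative):
import java.util.concurrent.Executor;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.scheduling.annotation.EnableAsync;
import org.springframework.scheduling.concurrent.ThreadPoolTaskExecutor;

@Configuration
@EnableAsync
public class AsyncConfig {

    // Executor picked up by @Async methods; sized here for four parallel queries
    @Bean
    public Executor taskExecutor() {
        ThreadPoolTaskExecutor executor = new ThreadPoolTaskExecutor();
        executor.setCorePoolSize(4);
        executor.setMaxPoolSize(4);
        executor.setThreadNamePrefix("db-dump-");
        executor.initialize();
        return executor;
    }
}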
Here is the Async code
@Component
public class MySampleService {

    private final static Logger LOGGER = Logger.getLogger(MySampleService.class);

    @Async
    public Future<String> callAsync(JdbcTemplate jdbcTemplate, String query) throws InterruptedException {
        try {
            jdbcTemplate.queryForList(query);
            // process the results
            return new AsyncResult<String>("success");
        } catch (Exception ex) {
            return new AsyncResult<String>("failed");
        }
    }
}
Here is the caller
public String taskExecutor() throws InterruptedException, ExecutionException, SQLException {
    Future<String> asyncResult1 = mySampleService.callAsync(jdbcTemplate, query1);
    Future<String> asyncResult2 = mySampleService.callAsync(jdbcTemplate, query2);
    Future<String> asyncResult3 = mySampleService.callAsync(jdbcTemplate, query3);
    Future<String> asyncResult4 = mySampleService.callAsync(jdbcTemplate, query4);
    LOGGER.info(asyncResult1.get());
    LOGGER.info(asyncResult2.get());
    LOGGER.info(asyncResult3.get());
    LOGGER.info(asyncResult4.get());
    // now all threads have finished; close the connection for this datasource
    jdbcTemplate.getDataSource().getConnection().close();
    return "done";
}
I am wondering if this is the right way to do it, or whether there is an existing/optimized out-of-the-box solution I am missing. I can't use spring-data-jpa since my queries are complex.
Thanks
From the Spring Boot docs:
"Production database connections can also be auto-configured using a pooling DataSource. Here's the algorithm for choosing a specific implementation:
- We prefer the Tomcat pooling DataSource for its performance and concurrency, so if that is available we always choose it.
- Otherwise, if HikariCP is available we will use it.
- If neither the Tomcat pooling datasource nor HikariCP are available and if Commons DBCP is available we will use it, but we don't recommend it in production.
- Lastly, if Commons DBCP2 is available we will use it.
If you use the spring-boot-starter-jdbc or spring-boot-starter-data-jpa 'starters' you will automatically get a dependency to tomcat-jdbc."
So you should be provided with sensible defaults.
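If the pool does need tuning for this kind of parallel dumping, the standard Spring Boot properties apply; a sketch for the default tomcat-jdbc pool (values are illustrative):
spring:
  datasource:
    tomcat:
      max-active: 8
      max-idle: 4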

One Tomcat two spring application (war) two seperate logging configurations

As mentioned in the title, I have two applications with two different logging configurations. As soon as I use Spring's logging.file setting, I can no longer separate the configurations of the two apps.
The problem worsens because one app is using logback.xml and one app is using log4j.properties.
I tried to introduce a new configuration parameter in one application where I can set the path to the logback.xml, but I am unable to make the new setting work for all logging in the application.
// logger and context are assumed fields; a typical way to obtain them:
private static final Logger logger = LoggerFactory.getLogger(IndexerApplication.class);
private static final LoggerContext context = (LoggerContext) LoggerFactory.getILoggerFactory();

public static void main(String[] args) {
    reconfigureLogging();
    SpringApplication.run(IndexerApplication.class, args);
}

private static void reconfigureLogging() {
    if (System.getProperty("IndexerLogging") != null && !System.getProperty("IndexerLogging").isEmpty()) {
        try {
            JoranConfigurator configurator = new JoranConfigurator();
            configurator.setContext(context);
            // Call context.reset() to clear any previous configuration, e.g. default
            // configuration. For multi-step configuration, omit calling context.reset().
            System.out.println("SETTING: " + System.getProperty("IndexerLogging"));
            System.out.println("SETTING: " + System.getProperty("INDEXER_LOG_FILE"));
            context.reset();
            configurator.doConfigure(System.getProperty("IndexerLogging"));
        } catch (JoranException je) {
            System.out.println("ERROR IN CONFIG");
        }
        logger.info("Entering application.");
    }
}

@Override
protected SpringApplicationBuilder configure(SpringApplicationBuilder application) {
    reconfigureLogging();
    return application.sources(applicationClass);
}
The above code works somehow, but the only log entry written to the logfile specified in the configuration that ${IndexerLogging} points to is the entry from logger.info("Entering application."); :(
I don't really want to attach that code to every class that does some logging in the application.
The application has to be runnable as a Tomcat deployment but also as a Spring Boot application with embedded Tomcat.
Any idea how I can set the path from ${IndexerLogging} as the path to the configuration file when logging is first configured in the application?
Take a look at https://github.com/qos-ch/logback-extensions/wiki/Spring; there you can configure which logback config file to use.
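A minimal sketch of the web.xml wiring described there, assuming the logback-ext-spring artifact is on the classpath; the file path is a placeholder:
<!-- web.xml: load a per-app logback file before the application context starts -->
<context-param>
    <param-name>logbackConfigLocation</param-name>
    <param-value>file:///path/to/indexer-logback.xml</param-value>
</context-param>
<listener>
    <listener-class>ch.qos.logback.ext.spring.web.LogbackConfigListener</listener-class>
</listener>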
