Dynamically configuring Spring State Machine - spring-statemachine

Some queries on Spring State Machine:
Can we have more than one state machine in a single Spring project, where one state machine serves one workflow (say, a CD player workflow) and the other a turnstile?
Can I dynamically load the configuration in my config class, for instance from a big data source holding JSON-formatted data in which we store our states, events, transitions etc.?
One of my requirements is that the workflow or model may change frequently, and I need to configure that in my Spring project. How can I do that effectively with Spring State Machine?

1) You can have multiple machines. @EnableStateMachine has an id property for the bean name. You can also expose the config via @EnableStateMachineFactory. If you want to work outside of JavaConfig, there is a manual builder model for it.
2/3) There is a public configuration API between JavaConfig and the state machine. One user of this config model (outside of JavaConfig) is the UML-based modeling, which uses Eclipse's UML XML file to load the config. UML is your best bet, as we don't have other built-in configuration hooks at the moment. Contributions welcome ;)
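If you go the UML route, a minimal sketch (assuming the spring-statemachine-uml dependency and an Eclipse Papyrus model file at classpath:model.uml) would look roughly like this:
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.statemachine.config.EnableStateMachine;
import org.springframework.statemachine.config.StateMachineConfigurerAdapter;
import org.springframework.statemachine.config.builders.StateMachineModelConfigurer;
import org.springframework.statemachine.config.model.StateMachineModelFactory;
import org.springframework.statemachine.uml.UmlStateMachineModelFactory;

@Configuration
@EnableStateMachine
public class UmlStateMachineConfig extends StateMachineConfigurerAdapter<String, String> {

    @Override
    public void configure(StateMachineModelConfigurer<String, String> model) throws Exception {
        // States, events and transitions come from the UML file instead of JavaConfig.
        model.withModel().factory(modelFactory());
    }

    @Bean
    public StateMachineModelFactory<String, String> modelFactory() {
        return new UmlStateMachineModelFactory("classpath:model.uml");
    }
}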

You can configure the state machine dynamically using a Builder. The Builder uses the same configuration interfaces behind the scenes that the @Configuration model uses, through adapter classes.
Example:
StateMachine<String, String> buildMachine1() throws Exception {
    Builder<String, String> builder = StateMachineBuilder.builder();
    builder.configureStates()
        .withStates()
            .initial("S1")
            .end("SF")
            .states(new HashSet<String>(Arrays.asList("S1", "S2", "S3", "S4")));
    return builder.build();
}
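Note that the machine above defines no transitions; before calling build() you would normally configure those as well, for example (the event name E1 is an assumption):
builder.configureTransitions()
    .withExternal()
        .source("S1").target("S2").event("E1");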
Link to official docs: Dynamic Spring State Machine

Related

How to set properties of Tomcat JDBC Pool when using Spring Cloud Connector config?

I want to configure the properties of the Tomcat JDBC Pool with custom parameter values. The pool is bootstrapped by the spring-cloud (Spring Cloud Connector) environment (Cloud Foundry) and connected to a PostgreSQL database. In particular, I want to set the minIdle, maxIdle and initialSize properties for the given pool.
In a "spring-vanilla" environment (non-cloud) the properties can be easily set by using
application.properties / .yaml files with environment properties,
#ConfigurationProperties annotation.
However, this approach doesn't transfer to my cloud environment, where the URL (and other parameters) are injected from the environment variable VCAP_SERVICES (via the ServiceInfo instances). I don't want to re-implement the logic which Spring Cloud already provides with its connectors.
After some searching I also stumbled over some tutorials / guides which suggest making use of the PoolConfig object (e.g. http://cloud.spring.io/spring-cloud-connectors/spring-cloud-spring-service-connector.html#_relational_database_db2_mysql_oracle_postgresql_sql_server). However, that way one cannot set the properties I need, merely the following three:
minPoolSize,
maxPoolSize,
maxWaitTime.
Note that I don't want to set connection-related properties (such as charset), but rather properties associated with the pool itself.
In essence, I would like to do the configuration similarly to https://www.baeldung.com/spring-boot-tomcat-connection-pool (using spring.datasource.tomcat.* properties). The problem with that approach is that the properties are not considered if the datasource was created by Spring Cloud. The article https://dzone.com/articles/binding-data-services-spring, section "Using a CloudFactory to create a DataSource", claims that the following code snippet makes it so that the configuration "can be tweaked using application.properties via spring.datasource.* properties":
@Bean
@ConfigurationProperties(DataSourceProperties.PREFIX)
public DataSource dataSource() {
    return cloud().getSingletonServiceConnector(DataSource.class, null);
}
However, my own local test (with spring-cloud:Greenwich.RELEASE and spring-boot-starter-parent:2.1.3.RELEASE) showed that those property values are simply ignored.
I found an ugly way to solve my problem but I think it's not appropriate:
Let spring-cloud create the DataSource, which is not the pooled DataSource directly,
check that the reference is a descendant of a DelegatingDataSource,
resolve the delegate, which is then the pool itself,
change the properties programmatically directly at the pool itself.
I do not believe that this is the right way, as I am relying on internal knowledge (the layering of the datasources). Additionally, this approach does not work for the initialSize property, which is only considered when the pool is created.
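For reference, my workaround sketched as code (a BeanPostProcessor in a @Configuration class; the property values are illustrative):
import org.springframework.beans.factory.config.BeanPostProcessor;
import org.springframework.context.annotation.Bean;
import org.springframework.jdbc.datasource.DelegatingDataSource;

@Bean
public static BeanPostProcessor tomcatPoolTuner() {
    return new BeanPostProcessor() {
        @Override
        public Object postProcessAfterInitialization(Object bean, String beanName) {
            Object target = bean;
            if (target instanceof DelegatingDataSource) {
                // Unwrap the delegating layer created by Spring Cloud Connectors.
                target = ((DelegatingDataSource) target).getTargetDataSource();
            }
            if (target instanceof org.apache.tomcat.jdbc.pool.DataSource) {
                org.apache.tomcat.jdbc.pool.DataSource pool =
                        (org.apache.tomcat.jdbc.pool.DataSource) target;
                pool.setMinIdle(5);
                pool.setMaxIdle(10);
                // initialSize cannot be changed here; it is only read at pool creation.
            }
            return bean;
        }
    };
}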

Spring Boot application properties load process: change programmatically to improve security

I have a Spring Boot micro-service with database credentials defined in the application properties.
spring.datasource.url=<<url>>
spring.datasource.username=<<username>>
spring.datasource.password=<<password>>
We do not create the data source or the connection manually; Spring creates the database connection with JPA (org.springframework.boot.autoconfigure.orm.jpa.HibernateJpaAutoConfiguration).
We only provide the application properties, and Spring creates the connections automatically for use with the database connection pool.
Our requirement is to enhance security by not keeping the DB properties in clear text. Two possible methods:
Encrypt the database credentials
Use the AWS Secrets Manager (then fetch the credentials when the application loads).
For option 1, jasypt can be used. Since we are only providing the properties and do not want to create the data source manually, the problem is how to make this understood by the Spring framework. Ideally I could get some working sample or methods.
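For example, my understanding is that with jasypt-spring-boot it would look something like this (the encrypted value and master key are made up):
spring.datasource.password=ENC(encryptedBase64Value)
# master key supplied at startup, e.g.:
# java -jar my-service.jar --jasypt.encryptor.password=masterKey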
Regarding option 2:
first we need to define a secretName,
use the secretName to get the database credentials from AWS Secrets Manager,
update the application properties programmatically so that they are understood by the Spring framework (I need to know this step).
I need to use either option 1 or option 2. I have mentioned the issues with each option above.
What you could do is use environment variables for your properties. You can use them like this:
spring.datasource.url=${SECRET_URL}
You could then retrieve these and start your Spring process using a ProcessBuilder. (Or set the variables any other way)
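A rough sketch of that launcher idea (the jar path and the secret lookup are made up):
public class Launcher {
    public static void main(String[] args) throws Exception {
        ProcessBuilder pb = new ProcessBuilder("java", "-jar", "my-service.jar");
        // Resolve the secret however you like and pass it as an environment variable.
        pb.environment().put("SECRET_URL", fetchSecretUrl());
        pb.inheritIO().start().waitFor();
    }

    private static String fetchSecretUrl() {
        // Placeholder: look up the real JDBC URL from wherever you keep secrets.
        return "jdbc:postgresql://localhost:5432/mydb";
    }
}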
I have found the solution to my problem.
We need to define an org.springframework.context.ApplicationListener in the spring.factories file. It should define the required application context listener, like below.
org.springframework.context.ApplicationListener=com.sample.PropsLoader
The PropsLoader class looks like this:
public class PropsLoader implements ApplicationListener<ApplicationEnvironmentPreparedEvent> {

    @Override
    public void onApplicationEvent(ApplicationEnvironmentPreparedEvent event) {
        ConfigurableEnvironment environment = event.getEnvironment();
        String appEnv = environment.getProperty("application.env");

        // Set new properties based on the application environment:
        // call other methods and, depending on the environment, get the required values.
        Properties props = new Properties();
        props.put("new_property", "value");
        environment.getPropertySources().addFirst(new PropertiesPropertySource("props", props));
    }
}
The spring.factories file should be defined under the META-INF folder in the resources directory.
This will set the application context with new properties before loading any other beans.
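For option 2, the same PropsLoader could fetch the credentials from AWS Secrets Manager before registering them. A sketch using the AWS SDK for Java v1 (the secret name and JSON shape are assumptions):
import com.amazonaws.services.secretsmanager.AWSSecretsManager;
import com.amazonaws.services.secretsmanager.AWSSecretsManagerClientBuilder;
import com.amazonaws.services.secretsmanager.model.GetSecretValueRequest;
import com.amazonaws.services.secretsmanager.model.GetSecretValueResult;

// Inside PropsLoader.onApplicationEvent, before building the Properties object:
AWSSecretsManager client = AWSSecretsManagerClientBuilder.standard().build();
GetSecretValueResult secret = client.getSecretValue(
        new GetSecretValueRequest().withSecretId("my-db-secret"));
String secretJson = secret.getSecretString();
// Parse the JSON (e.g. with Jackson) and register the values:
// props.put("spring.datasource.username", ...);
// props.put("spring.datasource.password", ...);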

Mule connector config needs dynamic attributes

I have developed a new Connector. This connector needs to be configured with two parameters, let's say:
default_trip_timeout_milis
default_trip_threshold
The challenge is that I want to read ${myValue_a} and ${myValue_b} from an API, using an HTTP call, not from a file or inline values.
Since this is a connector, I need to make this API call somewhere before the connectors are initialized.
FlowVars aren't an option, since they are initialized with the flows, and this happens earlier in the Mule app life cycle.
My idea is to create a Spring bean implementing Initialisable, so it will be called before the connectors are initialized, and there, using any Java-based libs (Spring RestTemplate?), call the API, get the values, and store them somewhere (context? objectStore?) so the connector can access them.
Make sense? Any other ideas?
Thanks!
Hmm, you could make a class that creates the properties at startup and, in this class, obtain the API properties via an HTTP request. Example below:
public class PropertyInit implements InitializingBean, FactoryBean<Properties> {

    private final Properties props = new Properties();

    @Override
    public void afterPropertiesSet() throws Exception {
        // Called once at startup: fetch the values over HTTP here
        // (e.g. with Spring's RestTemplate) and store them in props.
    }

    @Override
    public Properties getObject() throws Exception {
        return props;
    }

    @Override
    public Class<Properties> getObjectType() {
        return Properties.class;
    }
}
Now you should be able to load this property class with:
<context:property-placeholder properties-ref="propertyInit"/>
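For the properties-ref to resolve, the class also needs to be registered as a bean with that name (the package here is an assumption):
<bean id="propertyInit" class="com.example.PropertyInit"/>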
Hope you like this idea. I used this approach in a previous project.
First, I want to give you a strong warning about doing this. If you go down this path, you risk breaking your application in very strange ways: if any other components depend on this component, then having components reconfigure dynamically at startup will break them. You should think about whether there are other ways to achieve this behaviour instead of using properties.
That said, the way to do this would be to use a proxy pattern: a proxy for the component that is recreated whenever its properties change. So you would create a class which extends Circuit Breaker and encapsulates an instance of Circuit Breaker that is recreated whenever its properties change. These properties must not be used outside of the proxy class, as other components may read them at startup and then never refresh. Keep in mind that anything which might directly or indirectly access these properties cannot do so in its initialisation phase, or your application will break.
It's worth taking a look at Spring Cloud Config, which allows you to have a properties server from which all your applications can hot-reload properties at runtime when they change. I'm not sure whether you can take that path in Mule, i.e. whether Spring Cloud is supported yet, but it's a nice thing to know exists.

Spring Data MongoDB: eliminate POJOs

My system is a dynamic telemetry system. We have hundreds of different spiders all sending telemetry back to the Spring Boot server. Everything is dynamic, driven by JSON files in Mongo, including the UI. We don't build the UI; instead, individual teams can configure their own UI for their needs, all by editing JSON docs.
We have the majority of the UI running, and I began the middleware piece. We are using Spring Boot for the first time, along with Spring Data Mongo and several MQ listeners for events. The problem is Spring Data. I started reading the docs on it and realized they do not address using it without POJOs. I have this wonderfully dynamic model that changes per user per minute if the telemetry spiders dictate; I couldn't shackle this to a POJO if I tried. Is there a way to use Spring Data with a Map?
It seems from my experiments that the big issue is that there is no way to tell the CRUD routines of the repository class which collection to query without a POJO.
Are my suspicions correct that this won't work, and am I better off ditching Spring Data and using the Mongo driver directly?
I don't think you can do it without a POJO when using Spring Data. The least you could do is this:
public interface NoPojoRepository extends MongoRepository<DummyPojo, String> {
}
and create a dummy POJO with just an id and a Map.
@Data
public class DummyPojo {

    @Id
    private String id;

    private Map<String, Object> value;
}
Since this value field is a map, you can store pretty much anything.
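Usage is then straightforward (the field values here are illustrative):
DummyPojo doc = new DummyPojo();
doc.setId("spider-42");
Map<String, Object> telemetry = new HashMap<>();
telemetry.put("cpu", 0.93);
telemetry.put("requests", 1024);
doc.setValue(telemetry); // @Data generates the getters and setters
noPojoRepository.save(doc);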

GemFire - Spring Boot Configuration

I am working on a project that has a requirement of Pivotal GemFire.
I am unable to find a proper tutorial on how to configure GemFire with Spring Boot.
I have created a partitioned Region and I want to configure Locators as well, but I need only the server-side configuration, as the client is handled by someone else.
I am totally new to Pivotal GemFire and really confused. I have tried creating a cache.xml, but then somehow a cache.out.xml gets created and there are many issues.
@Priyanka -
Best place to start is with the Guides on spring.io. Specifically, have a look at...
"Accessing Data with GemFire"
There is also...
"Cache Data with GemFire", and...
"Accessing GemFire Data with REST"
However, these guides focus mostly on "client-side" application concerns, "data access" (over REST), "caching", etc.
Still, you can use Spring Data GemFire (in a Spring Boot application even) to configure a GemFire Server. I have many examples of this. One in particular...
"Spring Boot GemFire Server Example"
This example demonstrates how to bootstrap a Spring Boot application as a GemFire Server (technically, a peer node in the cluster). Additionally, the GemFire properties are specified in Spring config and can use Spring's normal conventions (property placeholders, SpEL expressions) to configure these properties, like so...
https://github.com/jxblum/spring-boot-gemfire-server-example/blob/master/src/main/java/org/example/SpringBootGemFireServer.java#L59-L84
This particular configuration makes the GemFire Server a "GemFire Manager", possibly with an embedded "Locator" (indicated by the start-locator GemFire property, not to be confused with the "locators" GemFire property, which allows our node to join an "existing" cluster) as well as a GemFire CacheServer to serve GemFire cache clients (with a ClientCache).
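The gist of the linked configuration is along these lines (a condensed sketch; the names, port and property values are assumptions):
import java.util.Properties;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Bean;
import org.springframework.data.gemfire.CacheFactoryBean;

@Bean
Properties gemfireProperties(@Value("${gemfire.log.level:config}") String logLevel) {
    Properties gemfireProperties = new Properties();
    gemfireProperties.setProperty("name", "SpringBootGemFireServer");
    gemfireProperties.setProperty("log-level", logLevel);
    gemfireProperties.setProperty("start-locator", "localhost[10334]"); // embedded Locator
    gemfireProperties.setProperty("jmx-manager", "true");
    gemfireProperties.setProperty("jmx-manager-start", "true"); // makes this node a Manager
    return gemfireProperties;
}

@Bean
CacheFactoryBean gemfireCache(Properties gemfireProperties) {
    CacheFactoryBean gemfireCache = new CacheFactoryBean();
    gemfireCache.setProperties(gemfireProperties);
    return gemfireCache;
}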
This example creates a "Factorials" Region, with a CacheLoader (definition here) to populate the "Factorials" Region on cache misses.
Since this example starts an embedded GemFire Manager in the Spring Boot GemFire Server application process, you can even connect to it using Gfsh, like so...
gfsh> connect --jmx-manager=localhost[1099]
Then you can run "gets" on the "Factorials" Region to see it compute factorials of the numeric keys you give it.
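For example (the key class is an assumption):
gfsh> get --region=/Factorials --key=5 --key-class=java.lang.Long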
To see more advanced configuration, have a look at my other repos, in particular the Contacts Application RI (here).
Hope this helps!
-John
Well, I had the same problem; let me share with you what worked for me. In this case I'm using Spring Boot and Pivotal GemFire as a cache client.
Install and run GemFire
Read the 15-minute quick start guide
Create a locator (let's call it locator1), a server (server1) and a region (region1)
Go to the folder where you started the 'Gee Fish' (gfsh), then go to the locator's folder and open the log file; in that file you can find the port your locator is using.
Now let's see the Spring boot side:
In your application class with the main method, add the @EnableGemfireCaching annotation
In the method (wherever it is) you want to cache, add the @Cacheable("region1") annotation.
Now let's create a configuration file for the caching:
// this is my working class
@Configuration
public class CacheConfiguration {

    @Bean
    ClientCacheFactoryBean gemfireCacheClient() {
        return new ClientCacheFactoryBean();
    }

    @Bean(name = GemfireConstants.DEFAULT_GEMFIRE_POOL_NAME)
    PoolFactoryBean gemfirePool() {
        PoolFactoryBean gemfirePool = new PoolFactoryBean();
        gemfirePool.addLocators(Collections.singletonList(
            new ConnectionEndpoint("localhost", HERE_GOES_THE_PORT_NUMBER_FROM_STEP_4)));
        gemfirePool.setName(GemfireConstants.DEFAULT_GEMFIRE_POOL_NAME);
        gemfirePool.setKeepAlive(false);
        gemfirePool.setPingInterval(TimeUnit.SECONDS.toMillis(5));
        gemfirePool.setRetryAttempts(1);
        gemfirePool.setSubscriptionEnabled(true);
        gemfirePool.setThreadLocalConnections(false);
        return gemfirePool;
    }

    @Bean
    ClientRegionFactoryBean<Long, Long> getRegion(ClientCache gemfireCache, Pool gemfirePool) {
        ClientRegionFactoryBean<Long, Long> region = new ClientRegionFactoryBean<>();
        region.setName("region1");
        region.setLookupEnabled(true);
        region.setCache(gemfireCache);
        region.setPool(gemfirePool);
        region.setShortcut(ClientRegionShortcut.PROXY);
        return region;
    }
}
That's all! Also, do not forget to make the class that is being cached serializable (implements Serializable), i.e. the class your cached method returns.
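For illustration, a hypothetical cached method tying the pieces together (class and method names are assumptions):
@Service
public class GreetingService {

    @Cacheable("region1")
    public String expensiveGreeting(String name) {
        // Runs only on a cache miss; the return value is stored in region1.
        // String is already Serializable, which is why this works as-is.
        return "Hello, " + name + "!";
    }
}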
