Unable to create Global region using Spring Data Geode/Gemfire

From Spring Data Geode/Gemfire, can we create Regions on the cluster? Currently it creates a local cache Region, but any configuration done in either ClientCache or ServerCache mode has no impact on the cluster servers.
But if we create a REPLICATE Region using gfsh commands, then the connectivity works fine. Is that the only way to create a REPLICATE
Region in a Gemfire/Geode cluster?
Next, there is a lot of documentation referring to Regions with GLOBAL scope, but again in gfsh there is no way to create a Region with GLOBAL Scope, nor could I locate any such configuration via Spring Data Geode.
Do we have any additional information on this?
Regards,
Malaya
I searched the Geode/Gemfire documentation for any such commands but couldn't find any.
I tried to adapt Spring Data Geode/Gemfire, but even there, there is no option to create a GLOBAL Region.

Spring Data for Apache Geode (SDG) does support pushing configuration metadata for Apache Geode Regions (and other Apache Geode objects, e.g. Indexes) to the cluster of servers using the SDG Cluster Configuration Push feature. However, this feature currently only pushes the Region "name" and DataPolicy type (Javadoc) to the servers from the client.
Additionally, "Cluster Configuration Push" only applies when your Spring [Boot] Data application is an Apache Geode ClientCache. If the application is a peer Cache, then this feature does not apply.
NOTE: Spring Boot for Apache Geode (SBDG) applies additional features on top of SDG's Cluster Configuration Push feature. See here. Again, this applies to clients only.
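For illustration only, a minimal sketch of a Spring Data Geode client application that triggers this Cluster Configuration Push (via SDG's @EnableClusterConfiguration annotation) might look like the following; the application name and the Customer entity are assumptions for the example:

import org.springframework.data.gemfire.config.annotation.ClientCacheApplication;
import org.springframework.data.gemfire.config.annotation.EnableClusterConfiguration;
import org.springframework.data.gemfire.config.annotation.EnableEntityDefinedRegions;

// ClientCache application; the Region definitions derived from the entity classes
// (name & DataPolicy only, as noted above) are pushed to the servers in the cluster.
@ClientCacheApplication(name = "MySpringGeodeClient")
@EnableEntityDefinedRegions(basePackageClasses = Customer.class)
@EnableClusterConfiguration(useHttp = true)
class MySpringGeodeClientApplication {
}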
AFAIR, Scope.GLOBAL only applies to REPLICATE Regions, first of all. That is, you cannot create a "GLOBAL" PARTITION Region. See the Apache Geode docs for further details on Scope along with other Region distribution configuration attributes.
Assuming your Spring [Boot] Data for Apache Geode application were a peer Cache instance, then you can configure your REPLICATE Regions with a "GLOBAL" Scope as follows:
// Alternatively, you can use @CacheServerApplication
@PeerCacheApplication(name = "MySpringGeodeServer")
class MySpringDataGeodeApplication {

    @Bean("MyRegion")
    ReplicatedRegionFactoryBean myReplicateRegion(GemFireCache cache) {

        ReplicatedRegionFactoryBean region = new ReplicatedRegionFactoryBean();

        region.setCache(cache);
        region.setScope(Scope.GLOBAL);

        return region;
    }
}
However, keep in mind this peer Cache, Spring-configured server application is NOT going to push the configuration to other servers in the cluster.
If you are using SDG Annotation-based configuration to (dynamically & conveniently) create Regions in your Spring peer Cache application, for example using either @EnableEntityDefinedRegions or perhaps @EnableCachingDefinedRegions, then you will additionally need to rely on one or more RegionConfigurer bean definitions (see docs) to customize the configuration of individual Regions, since the Annotation-based support does not enable fine-grained Region configuration customization of this nature (e.g. Scope on REPLICATE Regions).
This might look something like the following.
Given a persistent entity:
#Region("Customers")
class Customer {
// ...
}
Then:
@CacheServerApplication(name = "MySpringGeodeServer")
@EnableEntityDefinedRegions(
    basePackageClasses = Customer.class,
    serverRegionShortcut = RegionShortcut.REPLICATE
)
class MySpringDataGeodeApplication {

    @Bean
    RegionConfigurer customerRegionConfigurer() {

        return new RegionConfigurer() {

            @Override
            public void configure(String beanName, PeerRegionFactoryBean<?, ?> region) {

                if ("Customers".equals(beanName)) {
                    ((ReplicatedRegionFactoryBean) region).setScope(Scope.GLOBAL);
                }
            }
        };
    }
}
NOTE: Alternatively, if you need such fine-grained control over Region (bean) configuration like this, then you should simply use the Java-based configuration rather than Annotations anyway. Annotation-based configuration is primarily provided for convenience; it is by no means one-size-fits-all.
Technically, you could also annotate your persistent entity classes (e.g. Customer) with one of the Region type-specific mapping annotations (Javadoc), such as @ReplicateRegion, rather than simply the generic @Region mapping annotation. This allows you to do things like:
@ReplicateRegion(name = "Customers", scope = Scope.GLOBAL)
class Customer {
    // ...
}
Still, I generally prefer users to simply use the generic @Region mapping annotation, and again, if they need to do low-level configuration of Regions (like setting "Scope" on a REPLICATE Region), then simply use Java-based configuration as the opening example demonstrated.
Still, keep in mind, none of this is shared across the other servers inside the same cluster. Spring peer Cache applications do NOT push configuration metadata to other servers at all, and never will. This is sort of the point of using Apache Geode's Cluster Configuration Service anyhow.
NOTE: SDG peer Cache applications can be enabled (disabled by default) to inherit configuration from an existing cluster using Apache Geode's Cluster Configuration Service. For instance, see the useClusterConfiguration attribute (Javadoc) on the PeerCacheApplication annotation. There are strong reasons why SDG disabled this peer/server-side feature by default.
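For example, a minimal sketch (the application name is illustrative) of a peer Cache application opting in to inherit cluster configuration looks roughly like this:

// Peer Cache application that inherits existing configuration from the cluster
// via Apache Geode's Cluster Configuration Service (disabled by default in SDG).
@PeerCacheApplication(name = "MySpringGeodeServer", useClusterConfiguration = true)
class ClusterConfiguredServerApplication {
}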
Upon reviewing this and this (not that Scope is something you can "alter" on a Region after the fact anyway), you are CORRECT: when using Gfsh, you cannot create a GLOBAL-scoped REPLICATE Region in the cluster. :(
In general, keep in mind that anything that is possible to do with Apache Geode's API, you can definitely do with Spring (Boot/Data) for Apache Geode, and then some.
This is in large part because SDG was built on Apache Geode's API and not some tool, like Gfsh.

Related

Spring Boot: Handle configuration in multitenant application

I am implementing a Spring Boot application which will provide a multitenant environment. In my case that is achieved by using a database schema for each customer; for an example, see this project.
Now I am wondering how to implement tenant-specific configurations. I am using @ConfigurationProperties to bundle my property values, but these get instantiated once and not for each tenant.
What if I would like to use Spring Cloud Config with multiple tenant-specific Git repositories as a configuration backend? Would it be possible when using a JDBC backend for Spring Cloud Config?
Is there any way with default Spring mechanisms, or do I have to implement a database-based configuration framework myself?
Edit: For example, I have two tenants called Tenant1 and Tenant2. Both are running over the same application in the same context and are writing to the database schemas tenant_1 and tenant_2.
Identification of tenants happens via Keycloak (see Spring Keycloak multi tenant example). So I identify the tenantId from the JWT token and select the database connection as described here.
But now I would need the same mechanism for @Configuration beans. As far as I know, @Configuration beans are singletons, so there is always ONE configuration per application scope, and not ONE configuration per tenant.
So using Spring Cloud Config, Tenant1 would use https://git-url/tenant1, Tenant2 would use HashiCorp Vault as a backend, and perhaps Tenant3 would use a JDBC-based configuration backend. And all of that in ONE (of course scalable) application.
In case your application uses tenant-specific files (HTML templates etc.), the following can be applied. I have used the approach below for handling many tenants; it works fine and is easy to maintain.
I would suggest that you maintain a consistent configuration source (JDBC) for all of your tenant configurations. This gives you a single source that is cacheable and scalable for your application. Also, your tenants could navigate to a configuration page to manage their settings and alter them to suit their needs at any point in time, on the fly. (Example settings: records per page, theme, logo, filters, etc.)
Having the tenant configuration in files in Git will be a difficult task when you want to auto-provision tenants when they sign up, as it will involve a couple of distributed services. Having them in a TenantSettings table with the tenantId as a column could help you get the data in no time and will be easy.
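As a minimal sketch of that idea, assuming a tenant_settings table keyed by tenant_id (table, column, and class names here are all illustrative, and caching is assumed to be enabled via @EnableCaching), the per-tenant lookup could look roughly like this:

import java.util.HashMap;
import java.util.Map;
import org.springframework.cache.annotation.Cacheable;
import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.jdbc.core.ResultSetExtractor;
import org.springframework.stereotype.Service;

@Service
class TenantSettingsService {

    private final JdbcTemplate jdbcTemplate;

    TenantSettingsService(JdbcTemplate jdbcTemplate) {
        this.jdbcTemplate = jdbcTemplate;
    }

    // Cache the settings per tenant so the lookup stays cheap; evict the cache
    // entry whenever a tenant changes its settings on the configuration page.
    @Cacheable(cacheNames = "tenantSettings", key = "#tenantId")
    public Map<String, String> settingsFor(String tenantId) {
        ResultSetExtractor<Map<String, String>> extractor = rs -> {
            Map<String, String> settings = new HashMap<>();
            while (rs.next()) {
                settings.put(rs.getString("setting_key"), rs.getString("setting_value"));
            }
            return settings;
        };
        return jdbcTemplate.query(
            "SELECT setting_key, setting_value FROM tenant_settings WHERE tenant_id = ?",
            extractor, tenantId);
    }
}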
You can use Spring Cloud Config for your scenario, and it is adaptable. It is easily configurable and provides out-of-the-box features. For your specific scenario, you can have any number of microservices running, all controlled by one Spring Cloud Config Server which is connected to one Git repository. All your microservices ask for configuration properties from the Spring Cloud Config Server, and it fetches the properties directly from the Git repository. That repository can have multiple property files. It can hold common properties for all the microservices or service-specific configuration properties. If you want to keep confidential properties more securely, that is also possible via HashiCorp Vault. I will leave an image below for you to get a better idea about this concept.
In the image below, you can see the Git repository with common configuration property files and specific configuration property files for different services, all in the same repository.
I will add another image for you to get a better idea of how this can be arranged with application profiles as well.
Finally, I will add something to show the power of Spring Cloud Config and the out-of-the-box features it allows us to play with. You can automatically refresh configuration properties in a running application as well; you can configure Spring Cloud Config to do that. I will add an architectural diagram to achieve that.
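As a minimal sketch of the server side (the Git URI is a placeholder), a Spring Cloud Config Server backed by a Git repository boils down to:

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.config.server.EnableConfigServer;

// Spring Cloud Config Server; the backing Git repository is configured via
// spring.cloud.config.server.git.uri (e.g. https://git-url/config-repo) in its properties.
@SpringBootApplication
@EnableConfigServer
public class ConfigServerApplication {

    public static void main(String[] args) {
        SpringApplication.run(ConfigServerApplication.class, args);
    }
}

For the automatic refresh mentioned above, client applications typically mark refreshable beans with @RefreshScope and trigger the refresh endpoint (or use Spring Cloud Bus) so property changes are picked up without a restart.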
References for this answer are taken from Spring in Action, Fifth Edition, by Craig Walls.

Use eventstore with Axon Framework 3 and Spring Boot

I'm trying to build a simple distributed application, and I would like to save all the events into the event store.
For this reason, as suggested in the "documentation" of Axon here, I would like to use MySQL as the event store.
Since I don't have much experience with Spring, I cannot understand how to get it working.
I would have two separate services: one for the command side and one for the query side. Since I'm planning to have more services, I would like to know how to configure them to use an external event store (not stored inside any of these services).
For the distribution of the commands and events, I'm using RabbitMQ:
@Bean
public org.springframework.amqp.core.Exchange exchange() {
    return ExchangeBuilder.fanoutExchange("AxonEvents").build();
}

@Bean
public Queue queue() {
    return QueueBuilder.durable("AxonEvents").build();
}

@Bean
public Binding binding() {
    return BindingBuilder.bind(queue()).to(exchange()).with("*").noargs();
}

@Autowired
public void configure(AmqpAdmin admin) {
    admin.declareExchange(exchange());
    admin.declareQueue(queue());
    admin.declareBinding(binding());
}
This creates the required queue on a local running RabbitMQ instance (with default username and password).
My question is: How can I configure Axon to use MySQL as an event store?
As the Reference Guide currently does not specify this, I am going to point it out here.
Currently you've roughly got two approaches you can follow when distributing an Axon application or separating an Axon application into (micro)services:
Use a fully open source approach
Use AxonHub / AxonDb
Taking approach 2, which you can do in a developer environment, you would only have to run AxonHub and AxonDb and configure them for your application.
That's it, you're done; you can scale out your application and all the messages are routed as desired.
If you want to take route 1, however, you will have to provide several configurations.
Firstly, you state you use RabbitMQ to route commands and events.
In fact, the framework simply does not allow using RabbitMQ to route commands at all. Do note it is a solution to distribute EventMessages, just not CommandMessages.
I suggest either using JGroups or Spring Cloud to route your commands in an open-source scenario (I have added links to the Reference Guide pages regarding distributing the CommandBus for JGroups and Spring Cloud).
To distribute your events, you can take three approaches:
Use a shared database for your events.
Use AMQP to send your events to different instances.
Use Kafka to send your events to different instances.
My personal preference when starting an application, though, is to begin with one monolith and separate it when necessary.
I think the term 'Evolutionary Micro Services' captures this nicely.
Anyhow, if you use the messaging paradigm supported by Axon to its fullest, splitting out the Command side from the Query side afterwards should be quite simple.
If you'd in addition use AxonHub to distribute your messages, then you are practically done.
Concluding though, I did not find a very exact request in your question.
Does this give you the required information to proceed, @Federico Ponzi?
Update
After having given it some thought, I think your solution is quite simple.
You are using Spring Boot and you want to set up your EventStore to use MySQL. For Axon to set the right EventStorageEngine (the infrastructure component used under the covers to read/write events), you can simply add a dependency on spring-boot-starter-data-jpa.
Axon's auto-configuration will in that scenario automatically notice that you have Spring Data JPA on your classpath, and as such will set up the JpaEventStorageEngine.
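If you ever need (or prefer) to declare that storage engine explicitly rather than relying on the auto-configuration, a rough sketch using Axon 3 APIs could look like the following; constructor signatures vary slightly between Axon 3 versions, so treat this as an illustration, not the definitive setup:

import org.axonframework.common.jpa.EntityManagerProvider;
import org.axonframework.common.transaction.TransactionManager;
import org.axonframework.eventsourcing.eventstore.EventStorageEngine;
import org.axonframework.eventsourcing.eventstore.jpa.JpaEventStorageEngine;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class EventStoreConfiguration {

    // Roughly what Axon's Spring Boot auto-configuration registers for you once
    // Spring Data JPA (and your MySQL DataSource) is on the classpath.
    @Bean
    public EventStorageEngine eventStorageEngine(EntityManagerProvider entityManagerProvider,
                                                 TransactionManager transactionManager) {
        return new JpaEventStorageEngine(entityManagerProvider, transactionManager);
    }
}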

Is there a way a Spring bean's afterPropertiesSet method can pick up new settings added after server startup

We are constructing an instance of the Couchbase cluster in a Spring singleton bean's afterPropertiesSet() method by reading the configuration (hosts, ports, connection timeouts, ...). This is working well.
We were using Apache hierarchical configuration for the configuration. Apache hierarchical configuration has a reload strategy and does not require a server restart after the configuration is changed; new configuration is reflected within 2 minutes.
Now we have a requirement to update the Couchbase configuration at run time. But since Spring's afterPropertiesSet() is a bean lifecycle method and does not execute again, we are not able to achieve what we are looking for.
Right now, we need to restart the server (Tomcat) to reflect the new settings.
Is there any mechanism in Spring, or any other better approach, to fulfill our requirement (a singleton bean capable of handling configuration changes at run time)?
Thinking from a design perspective, please provide your thoughts.
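For context, the setup described above looks roughly like the following sketch (class, bean, and property names are illustrative):

import org.apache.commons.configuration.Configuration;
import org.springframework.beans.factory.InitializingBean;

import com.couchbase.client.java.Cluster;
import com.couchbase.client.java.CouchbaseCluster;

// The Couchbase Cluster is created exactly once when this singleton bean is
// initialized, so configuration values reloaded later are never re-read.
public class CouchbaseClusterFactory implements InitializingBean {

    private final Configuration config; // Apache hierarchical configuration

    private Cluster cluster;

    public CouchbaseClusterFactory(Configuration config) {
        this.config = config;
    }

    @Override
    public void afterPropertiesSet() {
        cluster = CouchbaseCluster.create(config.getStringArray("couchbase.hosts"));
    }

    public Cluster getCluster() {
        return cluster;
    }
}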

WSO2 Identity Server - Custom JDBC User Store Manager - JDBC Pools

WSO2 Identity Server 5.0.0 (and some patches ;))
It does not appear that custom JDBC user store managers (children of JDBCUserStoreManager) use a JDBC pool. I'm noticing that I can end up with session closed errors and SQL exceptions, whereas the Identity Server itself is still operating OK with its separate database connection (a configured pool).
So I guess I have two questions about this:
Somewhere up the chain, is there a JDBC pool for the JDBCUserStoreManager? If so, are there means to configure that guy more robustly?
Can I create another JDBC datasource in master-datasources.xml which my custom JDBC user store manager could reference?
Instead of using your own datasources/connections, you can import Carbon Datasources and use those (they come with inbuilt pooling and no need to worry about any configurations etc). You can either access these programmatically by directly calling ndatasource component or access them via JNDI.
To access them directly from ndatasource component:
The dependency:
<dependency>
    <groupId>org.wso2.carbon</groupId>
    <artifactId>org.wso2.carbon.ndatasource.core</artifactId>
    <version>add_correct_version_here</version>
</dependency>
(You can check repository/components/plugins to find out the correct version for above dependency)
You can inject DataSourceService as in this code (the @scr.reference tag refers to the service you need to inject; this uses the Maven SCR plugin to parse these dependencies when building the bundle).
Note that when you follow this approach you'll have to build the jar as an OSGi bundle as it uses declarative services (and have to place it in repository/components/dropins). Otherwise the dependencies won't be injected at runtime.
Next, you can access all the data sources as:
List<CarbonDataSource> dataSources = dataSourceService.getAllDataSources();
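Alternatively, if you go the JNDI route mentioned above, a rough sketch of the lookup could look like this (the JNDI name is illustrative; use whatever name you declare for the datasource in master-datasources.xml):

import javax.naming.InitialContext;
import javax.naming.NamingException;
import javax.sql.DataSource;

public class CarbonDataSourceLookup {

    // Resolves a Carbon datasource that was registered under a JNDI name
    // in master-datasources.xml.
    public static DataSource lookup() throws NamingException {
        InitialContext context = new InitialContext();
        return (DataSource) context.lookup("jdbc/MyCustomUserStoreDS");
    }
}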
Rajeev's answer was really insightful and helped with investigating and evaluating what I should do. But, I didn't end up using that route. :)
I ended up looking through the Identity Server and Carbon source code and found out that the JDBCUserStoreManager does end up creating a JDBC pool configured by the properties you set for that manager. I had a class called CustomUserStoreConstants for my custom user store manager which had setMandatoryProperty called by default to set:
JDBCRealmConstants.DRIVER_NAME
JDBCRealmConstants.URL
JDBCRealmConstants.USER_NAME
JDBCRealmConstants.PASSWORD
So the pool was configured with these values, BUT that was it...nothing else. So no wonder it wasn't surviving the night!
It turned out that if the code setting this up found a value for JDBCRealmConstants.DATASOURCE in the config params, it would just load that datasource and ignore any other params set. Seeing that, I got rid of the 4 params listed above and forced my custom user store to only allow having a DATASOURCE, and I set it in code to the default JNDI name that I would always give that datasource. With that, I was able to configure my JDBC pool for this datasource with all params such as testOnBorrow, validationQuery, validationInterval, etc. in master-datasources.xml. Now the only thing that would ever need to change is the datasource's configuration in that file.
The other reason I went with the datasource in master-datasources.xml is that I didn't have to decide in my custom user store's code which parameters I would want to have or not, and could just manage it all in the XML file easily. This really has advantages for portability of configs and IT involvement for deployments and debugging. I already have other datasources in this file for the IS deployment.
All said, my user store is now living through the night and weekends. :)

Blocking consumers in Apache Camel using existing components

I would like to use the Apache Camel JDBC component to read an Oracle table. I want Camel to run in a distributed environment to meet availability concerns. However, the table I am reading is similar to a queue, so I only want to have a single reader at any given time so I can avoid locking issues (messy in Oracle).
If the reader goes down, I want another reader to take over.
How would you accomplish this using the out-of-the-box Camel components? Is it possible?
It depends on your deployment architecture. For example, if you deploy your Camel apps on Servicemix (or ActiveMQ) in a master/slave configuration (for HA), then only one consumer will be active at a given time...
But, if you need multiple running (clustered for scalability), then (by default) they will compete/duplicate reads from the table unless you write your own locking logic.
This is easy using Hazelcast Distributed Locking. There is a camel-hazelcast component, but it doesn't support the lock API. Once you configure your apps to participate in a Hazelcast cluster, just use the lock API around any code that you need to synchronize for a given object...
import com.hazelcast.core.Hazelcast;
import java.util.concurrent.locks.Lock;

Lock lock = Hazelcast.getLock(myLockedObject);
lock.lock();
try {
    // do something here
} finally {
    lock.unlock();
}
