I am working on a project that requires Pivotal GemFire.
I am unable to find a proper tutorial about how to configure GemFire with Spring Boot.
I have created a partitioned Region and I want to configure Locators as well, but I only need the server-side configuration, as the client is handled by someone else.
I am totally new to Pivotal GemFire and really confused. I tried creating a cache.xml, but then somehow a cache.out.xml gets created and there are many issues.
@Priyanka -
Best place to start is with the Guides on spring.io. Specifically, have a look at...
"Accessing Data with GemFire"
There is also...
"Cache Data with GemFire", and...
"Accessing GemFire Data with REST"
However, these guides focus mostly on "client-side" application concerns, "data access" (over REST), "caching", etc.
Still, you can use Spring Data GemFire (in a Spring Boot application even) to configure a GemFire Server. I have many examples of this. One in particular...
"Spring Boot GemFire Server Example"
This example demonstrates how to bootstrap a Spring Boot application as a GemFire Server (technically, a peer node in the cluster). Additionally, the GemFire properties are specified in Spring config and can use Spring's normal conventions (property placeholders, SpEL expressions) to configure these properties, like so...
https://github.com/jxblum/spring-boot-gemfire-server-example/blob/master/src/main/java/org/example/SpringBootGemFireServer.java#L59-L84
This particular configuration makes the GemFire Server a "GemFire Manager", possibly with an embedded "Locator" (indicated by the start-locator GemFire property, not to be confused with the "locators" GemFire property, which allows our node to join an "existing" cluster) as well as a GemFire CacheServer to serve GemFire cache clients (with a ClientCache).
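For orientation, the server-side configuration in that example is roughly of this shape (a hedged sketch, not the exact code from the linked class; the property placeholder names are illustrative, while "start-locator", "jmx-manager", etc. are standard GemFire properties):
import java.util.Properties;

import org.springframework.beans.factory.annotation.Value;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.context.annotation.Bean;
import org.springframework.data.gemfire.CacheFactoryBean;

@SpringBootApplication
public class SpringBootGemFireServer {

    @Bean
    Properties gemfireProperties(@Value("${gemfire.locator.port:10334}") int locatorPort,
            @Value("${gemfire.manager.port:1099}") int managerPort) {

        Properties gemfireProperties = new Properties();
        gemfireProperties.setProperty("name", "SpringBootGemFireServer");
        gemfireProperties.setProperty("log-level", "config");
        // embedded Locator ("start-locator"), not to be confused with the "locators" property
        gemfireProperties.setProperty("start-locator", String.format("localhost[%d]", locatorPort));
        // make this peer a GemFire Manager so Gfsh can connect to it
        gemfireProperties.setProperty("jmx-manager", "true");
        gemfireProperties.setProperty("jmx-manager-start", "true");
        gemfireProperties.setProperty("jmx-manager-port", String.valueOf(managerPort));
        return gemfireProperties;
    }

    @Bean
    CacheFactoryBean gemfireCache(Properties gemfireProperties) {
        // peer Cache configured with the GemFire properties defined above
        CacheFactoryBean gemfireCache = new CacheFactoryBean();
        gemfireCache.setProperties(gemfireProperties);
        return gemfireCache;
    }
}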
This example creates a "Factorials" Region, with a CacheLoader (definition here) to populate the "Factorials" Region on cache misses.
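Conceptually, that CacheLoader is along these lines (a hedged sketch, not the exact definition from the repo; package names assume Apache Geode, older GemFire releases use com.gemstone.gemfire instead):
import org.apache.geode.cache.CacheLoader;
import org.apache.geode.cache.CacheLoaderException;
import org.apache.geode.cache.LoaderHelper;

public class FactorialsCacheLoader implements CacheLoader<Long, Long> {

    @Override
    public Long load(LoaderHelper<Long, Long> helper) throws CacheLoaderException {
        // compute the factorial of the key on a cache miss; the result is then stored in the Region
        long number = helper.getKey();
        long result = 1L;
        for (long factor = 2; factor <= number; factor++) {
            result *= factor;
        }
        return result;
    }

    @Override
    public void close() {
    }
}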
Since this example starts an embedded GemFire Manager in the Spring Boot GemFire Server application process, you can even connect to it using Gfsh, like so...
gfsh> connect --jmx-manager=localhost[1099]
Then you can run "gets" on the "Factorials" Region to see it compute factorials of the numeric keys you give it.
To see more advanced configuration, have a look at my other repos, in particular the Contacts Application RI (here).
Hope this helps!
-John
Well, I had the same problem; let me share what worked for me. In this case I'm using Spring Boot and Pivotal GemFire as a cache client.
Install and run GemFire
Read the 15-minute quick start guide
Create a locator (let's call it locator1), a server (server1) and a region (region1)
Go to the folder where you started gfsh (the "Gee Fish" shell), then go to the locator's folder and open the log file; in that file you can find the port your locator is using.
Now let's look at the Spring Boot side:
In your application class with the main method, add the @EnableGemfireCaching annotation
On the method (wherever it is) you want to cache, add the @Cacheable("region1") annotation.
Now let's create a configuration file for the caching:
// this is my working class (imports shown for a recent Spring Data GemFire release;
// package names can differ slightly between versions)
import java.util.Collections;
import java.util.concurrent.TimeUnit;

import org.apache.geode.cache.client.ClientCache;
import org.apache.geode.cache.client.ClientRegionShortcut;
import org.apache.geode.cache.client.Pool;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.data.gemfire.client.ClientCacheFactoryBean;
import org.springframework.data.gemfire.client.ClientRegionFactoryBean;
import org.springframework.data.gemfire.client.PoolFactoryBean;
import org.springframework.data.gemfire.config.xml.GemfireConstants;
import org.springframework.data.gemfire.support.ConnectionEndpoint;

@Configuration
public class CacheConfiguration {

    // the ClientCache this application uses to talk to the cluster
    @Bean
    ClientCacheFactoryBean gemfireCacheClient() {
        return new ClientCacheFactoryBean();
    }

    // connection Pool pointing at the locator created in step 3 (port taken from step 4)
    @Bean(name = GemfireConstants.DEFAULT_GEMFIRE_POOL_NAME)
    PoolFactoryBean gemfirePool() {
        PoolFactoryBean gemfirePool = new PoolFactoryBean();
        gemfirePool.addLocators(Collections.singletonList(
                new ConnectionEndpoint("localhost", HERE_GOES_THE_PORT_NUMBER_FROM_STEP_4)));
        gemfirePool.setName(GemfireConstants.DEFAULT_GEMFIRE_POOL_NAME);
        gemfirePool.setKeepAlive(false);
        gemfirePool.setPingInterval(TimeUnit.SECONDS.toMillis(5));
        gemfirePool.setRetryAttempts(1);
        gemfirePool.setSubscriptionEnabled(true);
        gemfirePool.setThreadLocalConnections(false);
        return gemfirePool;
    }

    // client-side PROXY Region mapped to "region1" on the server
    @Bean
    ClientRegionFactoryBean<Long, Long> getRegion(ClientCache gemfireCache, Pool gemfirePool) {
        ClientRegionFactoryBean<Long, Long> region = new ClientRegionFactoryBean<>();
        region.setName("region1");
        region.setLookupEnabled(true);
        region.setCache(gemfireCache);
        region.setPool(gemfirePool);
        region.setShortcut(ClientRegionShortcut.PROXY);
        return region;
    }
}
That's all! Also, do not forget to make the class being cached serializable (implements Serializable), i.e. the class your cached method returns.
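Putting the caching steps together, a minimal sketch of the service side (Quote and QuoteService are hypothetical names used only for illustration):
import java.io.Serializable;

import org.springframework.cache.annotation.Cacheable;
import org.springframework.stereotype.Service;

// the cached return type must be serializable because it is stored in (and fetched from) the server-side Region
class Quote implements Serializable {
    private String text;
    // getters/setters omitted
}

@Service
public class QuoteService {

    // the first call for a given id does the real work; later calls are served from "region1"
    @Cacheable("region1")
    public Quote findQuote(Long id) {
        return loadQuoteFromBackend(id);
    }

    private Quote loadQuoteFromBackend(Long id) {
        // stand-in for an expensive lookup
        return new Quote();
    }
}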
In Gfsh, I was able to do: create region --name=employee --type=REPLICATE --enable-statistics=true --entry-time-to-live-expiration=900.
We have a requirement to create the Region in Java using the @EnableEntityDefinedRegions annotation. When I use describe in Gfsh the Regions are showing, but the entry time-to-live (TTL) expiration is not being set using the approaches below.
Any idea how to set TTL in Java?
Spring Boot 2.5.x and spring-gemfire-starter 1.2.13.RELEASE are used in the app.
@EnableStatistics
@EnableExpiration(policies = {
    @EnableExpiration.ExpirationPolicy(regionNames = "Employee", timeout = 60, action = ExpirationActionType.DESTROY)
})
@EnableEntityDefinedRegions
public class BaseApplication {
....

@Region("Employee")
public class Employee {
or
@EnableStatistics
@EnableExpiration
public class BaseApplication {
----

@Region("Employee")
@TimeToLiveExpiration(timeout = "60", action = "DESTROY")
@Expiration(timeout = "60", action = "DESTROY")
public class Employee {
....
or
Using the bean creation approach below is also not working; I get the error "operation is not supported on a client cache".
@EnableEntityDefinedRegions
// @PeerCacheApplication for a peer cache; the Region is not being created in PCC GemFire
public class BaseApplication {
---
}

@Bean(name = "employee")
PartitionedRegionFactoryBean<String, Employee> getEmployee(final GemFireCache cache,
        RegionAttributes<String, Employee> peopleRegionAttributes) {

    PartitionedRegionFactoryBean<String, Employee> getEmployee = new PartitionedRegionFactoryBean<String, Employee>();
    getEmployee.setCache(cache);
    getEmployee.setAttributes(peopleRegionAttributes);
    getEmployee.setClose(false);
    getEmployee.setName("Employee");
    getEmployee.setPersistent(false);
    getEmployee.setDataPolicy(DataPolicy.PARTITION);
    getEmployee.setStatisticsEnabled(true);
    getEmployee.setEntryTimeToLive(new ExpirationAttributes(60));
    return getEmployee;
}

@Bean
@SuppressWarnings("unchecked")
RegionAttributesFactoryBean EmployeeAttributes() {
    RegionAttributesFactoryBean employeeAttributes = new RegionAttributesFactoryBean();
    employeeAttributes.setKeyConstraint(String.class);
    employeeAttributes.setValueConstraint(Employee.class);
    return employeeAttributes;
}
First, Spring Boot for Apache Geode (SBDG) 1.2.x is already EOL because Spring Boot 2.2.x is EOL (see details on support). SBDG follows Spring Boot's support lifecycle and policies.
Second, SBDG 1.2.x is based on Spring Boot 2.2.x. See the Version Compatibility Matrix for further details. We will not support mismatched dependency versions. While mismatched dependency versions may work in certain cases (mileage varies depending on your use case), version combinations not explicitly stated in the Version Compatibility Matrix are nonetheless unsupported. Also see the documentation on this matter.
Now, regarding your problem with TTL Region entry expiration policies...
SBDG auto-configuration creates an Apache Geode ClientCache instance by default (see docs). You cannot create a PARTITION Region using a ClientCache instance.
If your Spring Boot application is intended to be a peer Cache instance in an Apache Geode cluster (server-side), then you must explicitly declare your intention by overriding SBDG's auto-configuration, like so:
@PeerCacheApplication
@SpringBootApplication
class MySpringBootApacheGeodePeerCacheApplication {
// ...
}
TIP: See the documentation on creating peer Cache applications using SBDG.
Keep in mind that when you override SBDG's auto-configuration, then you may necessarily and implicitly be responsible for other aspects of Apache Geode's configuration, e.g. Security! Heed the warning.
On the other hand, if your intent is truly to enable your Spring Boot/SBDG application as a cache "client" (i.e. a ClientCache instance, the default), then TTL Region entry expiration policies do not make sense on client PROXY Regions, which is the default DataPolicy (EMPTY) for client Regions when using the Spring Data for Apache Geode (SDG) @EnableEntityDefinedRegions annotation (see Javadoc). This is because Apache Geode client PROXY Regions do not store any data locally; all data access operations are forwarded to the server/cluster.
Even if you alter the configuration to use client CACHING_PROXY Regions, the TTL Region expiration policies will only take effect locally. You must configure your corresponding server/cluster Regions separately (e.g. using Gfsh).
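For illustration, a hedged sketch of that client-side change, reusing the annotations from the question (the package holding ExpirationActionType varies across SDG versions); keep in mind it only affects the local CACHING_PROXY Region:
import org.apache.geode.cache.client.ClientRegionShortcut;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.data.gemfire.config.annotation.EnableEntityDefinedRegions;
import org.springframework.data.gemfire.config.annotation.EnableExpiration;
// NOTE: import ExpirationActionType from the SDG package matching your SDG version

@SpringBootApplication
@EnableEntityDefinedRegions(basePackageClasses = Employee.class,
    clientRegionShortcut = ClientRegionShortcut.CACHING_PROXY)
@EnableExpiration(policies = {
    @EnableExpiration.ExpirationPolicy(regionNames = "Employee", timeout = 60,
        action = ExpirationActionType.DESTROY)
})
public class BaseApplication {
    // expiration now applies to the local CACHING_PROXY Region only;
    // the server/cluster "Employee" Region still needs its own TTL (e.g. via Gfsh)
}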
Also, even though you can push cluster configuration from the client using SDG's @EnableClusterConfiguration (doc, Javadoc), or alternatively and preferably, SBDG's @EnableClusterAware annotation (doc, Javadoc; which is meta-annotated with SDG's @EnableClusterConfiguration), this functionality only pushes Region and Index configuration to the cluster, not expiration policies.
See the SBDG documentation on expiration for further details. This doc also leads to SDG's documentation on expiration, and specifically Annotation-based expiration configuration.
I see that the SBDG docs are not real clear on the matter of expiration, so I have filed an Issue ticket in SBDG to make this more clear.
This is regarding the CDI spec in Quarkus. I would like to understand: is there a configuration bean for Quarkus? How does one do any sort of configuration in Quarkus?
If I get it right, the original question is about @Configuration classes that can contain @Bean definitions. If so, then CDI producer methods and fields annotated with @javax.enterprise.inject.Produces are the corresponding alternative.
Application configuration is a completely different question though and Jay is right that the Quarkus configuration reference is the ultimate source of information ;-).
First of all, it is important to read how the CDI implementation in Quarkus differs from Spring.
Please refer this guide:
https://quarkus.io/guides/cdi-reference
The takeaway from this guide is that there is @Produces, which is the alternative to a @Configuration bean in Quarkus.
Let us take an example of a library that might require configuration through code, for example the Microsoft Azure IoT Service Client.
// assumed imports; the Azure package names depend on the azure-iot-service-client version in use
import java.io.IOException;
import java.net.URISyntaxException;
import javax.enterprise.context.ApplicationScoped;
import javax.enterprise.inject.Produces;
import org.eclipse.microprofile.config.inject.ConfigProperty;
import org.jboss.logging.Logger;
import com.microsoft.azure.sdk.iot.service.IotHubServiceClientProtocol;
import com.microsoft.azure.sdk.iot.service.ServiceClient;

@ApplicationScoped  // bean-defining scope so the producer class is discovered
public class IotHubConfiguration {

    @ConfigProperty(name = "iothub.device.connection.string")
    String connectionString;

    private static final Logger LOG = Logger.getLogger(IotHubConfiguration.class);

    @Produces
    public ServiceClient getIot() throws URISyntaxException, IOException {
        LOG.info("Inside Service Client bean");
        if (connectionString == null) {
            LOG.info("Connection String is null");
            throw new RuntimeException("IOT CONNECTION STRING IS NULL");
        }
        ServiceClient serviceClient = new ServiceClient(connectionString, IotHubServiceClientProtocol.AMQPS);
        serviceClient.open();
        LOG.info("opened Service Client Successfully");
        return serviceClient;
    }
}
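For completeness, any other bean can then simply inject the produced ServiceClient (a hedged sketch; DeviceNotifier is a hypothetical name):
import javax.enterprise.context.ApplicationScoped;
import javax.inject.Inject;

import com.microsoft.azure.sdk.iot.service.ServiceClient;

@ApplicationScoped
public class DeviceNotifier {

    @Inject
    ServiceClient serviceClient;   // the instance built by the @Produces method above

    // use serviceClient here, e.g. to send cloud-to-device messages
}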
For all libraries vertically integrated with Quarkus, application.properties can be used, and you will then get a driver object for that broker/database available directly through @Inject in your @ApplicationScoped/@Singleton bean. So, why is that?
To simplify and unify configuration.
To make sure no code is required for configuring anything, i.e. database config, broker config, Quarkus config, etc.
This drastically reduces the amount of code written for configuration, and also the JUnit tests needed to cover that code.
Let us take an example where Kafka producer configuration needs to be added in application.properties:
kafka.bootstrap.servers=${KAFKA_BROKER_URL:localhost:9092}
mp.messaging.outgoing.incoming_kafka_topic_test.topic=${KAFKA_INPUT_TOPIC_FOR_IOT_HUB:input_topic1}
mp.messaging.outgoing.incoming_kafka_topic_test.connector=smallrye-kafka
mp.messaging.outgoing.incoming_kafka_topic_test.value.serializer=org.apache.kafka.common.serialization.StringSerializer
mp.messaging.outgoing.incoming_kafka_topic_test.key.serializer=org.apache.kafka.common.serialization.StringSerializer
mp.messaging.outgoing.incoming_kafka_topic_test.health-readiness-enabled=true
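With that configuration in place, the producing side of the application only needs an Emitter bound to the channel (a hedged sketch; IotEventProducer is a hypothetical name, and depending on the Quarkus version the Channel/Emitter types come from either the MicroProfile or SmallRye packages):
import javax.enterprise.context.ApplicationScoped;
import javax.inject.Inject;

import org.eclipse.microprofile.reactive.messaging.Channel;
import org.eclipse.microprofile.reactive.messaging.Emitter;

@ApplicationScoped
public class IotEventProducer {

    @Inject
    @Channel("incoming_kafka_topic_test")   // matches the mp.messaging.outgoing.incoming_kafka_topic_test.* keys above
    Emitter<String> emitter;

    public void publish(String payload) {
        emitter.send(payload);   // written to the configured topic using the String serializer
    }
}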
For full blown project reference: https://github.com/JayGhiya/QuarkusExperiments/tree/initial_version_v1/KafkaProducerQuarkus
Quarkus References for Config:
https://quarkus.io/guides/config-reference
Example for reactive sql config: https://quarkus.io/guides/reactive-sql-clients
Now let us talk about a bonus feature that Quarkus provides which improves developer experience by at least an order of magnitude: profile-driven development and testing.
Quarkus provides three profiles:
dev - Activated when in development mode (i.e. quarkus:dev)
test - Activated when running tests
prod - The default profile when not running in development or test mode
Let us just say that in the given example you wanted to have different topics for development and different topics for production. Let us achieve that!
%dev.mp.messaging.outgoing.incoming_kafka_topic_test.topic=${KAFKA_INPUT_TOPIC_FOR_IOT_HUB:input_topic1}
%prod.mp.messaging.outgoing.incoming_kafka_topic_test.topic=${KAFKA_INPUT_TOPIC_FOR_IOT_HUB:prod_topic}
This is how simple it is. This is extremely useful in cases where your deployments run with SSL-enabled brokers/databases and so on, while for development purposes you have insecure local brokers/databases. This is a game changer.
I want to configure the properties of the Tomcat JDBC Pool with custom parameter values. The pool is bootstrapped by the spring-cloud (Spring Cloud Connector) environment (Cloud Foundry) and connected to a PostgreSQL database. In particular, I want to set the minIdle, maxIdle and initialSize properties for the given pool.
In a "spring-vanilla" environment (non-cloud) the properties can be easily set by using
application.properties / .yaml files with environment properties,
#ConfigurationProperties annotation.
However, this approach doesn't transfer to my Cloud environment, where the URL (and other parameters) are injected from the environment variable VCAP_SERVICES (via the ServiceInfo instances). I don't want to re-implement the logic which Spring Cloud already did with its connectors.
After some searching I also stumbled over some tutorials / guides which suggest making use of the PoolConfig object (e.g. http://cloud.spring.io/spring-cloud-connectors/spring-cloud-spring-service-connector.html#_relational_database_db2_mysql_oracle_postgresql_sql_server). However, that way one cannot set the properties I need, but merely the following three:
minPoolSize,
maxPoolSize,
maxWaitTime.
Note that I don't want to set connection-related properties (such as charset), but the properties are associated with the pool itself.
In essence, I would like to do the configuration similarly to https://www.baeldung.com/spring-boot-tomcat-connection-pool (using spring.datasource.tomcat.* properties). The problem with that approach is that the properties are not considered if the datasource was created by Spring Cloud. The article https://dzone.com/articles/binding-data-services-spring, section "Using a CloudFactory to create a DataSource", claims that the following code snippet makes it so that the configuration "can be tweaked using application.properties via spring.datasource.* properties":
@Bean
@ConfigurationProperties(DataSourceProperties.PREFIX)
public DataSource dataSource() {
    return cloud().getSingletonServiceConnector(DataSource.class, null);
}
However, my own local test (with spring-cloud:Greenwich.RELEASE and spring-boot-starter-parent:2.1.3.RELEASE) showed that those property values are simply ignored.
I found an ugly way to solve my problem but I think it's not appropriate:
Let spring-cloud create the DataSource, which is not the pooled DataSource directly,
check that the reference is a descendant of a DelegatingDataSource,
resolve the delegate, which is then the pool itself,
change the properties programmatically directly at the pool itself.
I do not believe that this is the right way as I am using internal knowledge (on the layering of datasources). Additionally, this approach does not work for the property initialSize, which is only considered when the pool is created.
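For reference, that workaround looks roughly like the following sketch (hedged; it relies on internal layering knowledge and, as noted, cannot influence initialSize; cloud() is used as in the snippet above):
import javax.sql.DataSource;

import org.springframework.context.annotation.Bean;
import org.springframework.jdbc.datasource.DelegatingDataSource;

@Bean
public DataSource dataSource() {
    DataSource dataSource = cloud().getSingletonServiceConnector(DataSource.class, null);
    if (dataSource instanceof DelegatingDataSource) {
        DataSource delegate = ((DelegatingDataSource) dataSource).getTargetDataSource();
        if (delegate instanceof org.apache.tomcat.jdbc.pool.DataSource) {
            org.apache.tomcat.jdbc.pool.DataSource pool = (org.apache.tomcat.jdbc.pool.DataSource) delegate;
            pool.setMinIdle(2);
            pool.setMaxIdle(10);
            // initialSize is only honored when the pool is first created, so it cannot be changed here
        }
    }
    return dataSource;
}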
My retail application has various contexts like receive, transfer, etc. The requests to these contexts are handled by RESTful microservices developed using Spring Boot. The persistence layer is Cassandra. It is shared by all services, as we couldn't do vertical scaling for the microservices at the DB level because the services are conceptually tightly coupled.
We want vertical scaling at the GemFire end by creating different Regions for different contexts.
For example, a BOX table in Cassandra will be updated by Region Box-Receive (receive context) and Region Box-Transfer (transfer context) via a CacheWriter.
Our problem is how to maintain data sync between these two Regions?
Please suggest any other approach also for separation at GemFire end.
GemFire version:
<dependency>
<groupId>com.gemstone.gemfire</groupId>
<artifactId>gemfire</artifactId>
<version>8.2.6</version>
</dependency>
One alternative approach, since you are using Spring Boot would be to do the following:
First, annotate your @SpringBootApplication class with @EnableGemfireCacheTransactions...
Example:
@SpringBootApplication
@EnableGemfireCacheTransactions
@EnableGemfireRepositories
class YourSpringBootApplication {

    public static void main(String[] args) {
        SpringApplication.run(YourSpringBootApplication.class, args);
    }
    ...
}
The @EnableGemfireCacheTransactions annotation enables Spring Data GemFire's GemfireTransactionManager, which integrates GemFire's CacheTransactionManager with Spring's Transaction Management infrastructure, which then allows you to do this...
Now, just annotate your @Service application component's transactional service methods with core Spring's @Transactional annotation, like so...
@Service
class YourBoxReceiverTransferService {

    @Transactional
    public <return-type> update(ReceiveContext receiveContext, TransferContext transferContext) {
        ...
        receiveContextRepository.save(receiveContext);
        transferContextRepository.save(transferContext);
        ...
    }
}
As you can see here, I also used Spring Data (GemFire's) Repository infrastructure to manage the persistence operations (e.g. CRUD), which will be used appropriately in the transactionally scoped context set up by Spring.
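For completeness, the repositories referenced above are just plain Spring Data (GemFire) repository interfaces, along these lines (a sketch; the entity and id types are assumptions):
import org.springframework.data.repository.CrudRepository;

interface ReceiveContextRepository extends CrudRepository<ReceiveContext, String> {
}

interface TransferContextRepository extends CrudRepository<TransferContext, String> {
}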
There are 2 advantages with the Spring approach over using GemFire's public API, which unnecessarily couples you to GemFire (a definite code smell, particularly in a Spring context):
You don't have to place a bunch of boilerplate, crap code into your application components, where it does not belong!
Using Spring's Transaction Management infrastructure, it is extremely easy to change your Transaction Management Strategy, such as switching from GemFire's local-only cache transactions to, say, Global, JTA-based Transactions if the need ever arises (such as, oh, well, now I need to send a message over a JMS message queue, after the GemFire Regions and the Cassandra BOX table are updated, to notify some downstream process that the Receive/Transfer context has been updated). With Spring's Transaction Management infrastructure, you do not need to change a single line of application code to change transaction management strategies (such as local to global, or global to local, etc).
Hope this helps!
-John
You can use transactions. Something like this should work:
// the key and value variables below are placeholders for your actual box data
CacheTransactionManager txMgr = cache.getCacheTransactionManager();
txMgr.begin();
boxReceive.put(key, receiveValue);
...
boxTransfer.put(key, transferValue);
txMgr.commit();
This will work provided you co-locate the box-receive and the box-transfer Regions and use the same key, or use a PartitionResolver to co-locate the data.
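A hedged sketch of such a PartitionResolver, routing entries of both Regions by a shared box id (BoxKey and getBoxId are hypothetical names; packages match the com.gemstone.gemfire 8.2.6 dependency above):
import com.gemstone.gemfire.cache.EntryOperation;
import com.gemstone.gemfire.cache.PartitionResolver;

public class BoxPartitionResolver implements PartitionResolver<BoxKey, Object> {

    @Override
    public Object getRoutingObject(EntryOperation<BoxKey, Object> opDetails) {
        // entries sharing the same box id hash to the same bucket in both co-located Regions
        return opDetails.getKey().getBoxId();
    }

    @Override
    public String getName() {
        return "BoxPartitionResolver";
    }

    @Override
    public void close() {
    }
}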
Some queries on Spring State Machine:
Can we have more than one state machine in a single Spring project, where one state machine serves one workflow (say, a CD player workflow) and the other a turnstile?
Can I dynamically load the configuration in my config class, for instance from a big data source holding JSON-formatted data, where we store our states, events, transitions, etc.?
One of my requirements is that I may have a frequently changing workflow or model, which I need to configure in my Spring project. How can I do that effectively with Spring State Machine?
1) You can have multiple machines. @EnableStateMachine has an id property for the bean name. You can also expose the config as an @EnableStateMachineFactory. If you want to work outside of JavaConfig, there is a manual builder model for it.
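For example, a minimal sketch of exposing one machine definition through a factory and creating instances from it (state and event names here are purely illustrative):
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.context.annotation.Configuration;
import org.springframework.statemachine.StateMachine;
import org.springframework.statemachine.config.EnableStateMachineFactory;
import org.springframework.statemachine.config.StateMachineConfigurerAdapter;
import org.springframework.statemachine.config.StateMachineFactory;
import org.springframework.statemachine.config.builders.StateMachineStateConfigurer;
import org.springframework.statemachine.config.builders.StateMachineTransitionConfigurer;

@Configuration
@EnableStateMachineFactory
class TurnstileStateMachineConfig extends StateMachineConfigurerAdapter<String, String> {

    @Override
    public void configure(StateMachineStateConfigurer<String, String> states) throws Exception {
        states.withStates().initial("LOCKED").state("UNLOCKED");
    }

    @Override
    public void configure(StateMachineTransitionConfigurer<String, String> transitions) throws Exception {
        transitions
            .withExternal().source("LOCKED").target("UNLOCKED").event("COIN")
            .and()
            .withExternal().source("UNLOCKED").target("LOCKED").event("PUSH");
    }
}

class TurnstileService {

    @Autowired
    private StateMachineFactory<String, String> turnstileFactory;

    // each call produces a fresh turnstile machine; a second workflow would get its own config/factory
    StateMachine<String, String> newTurnstile() {
        return turnstileFactory.getStateMachine();
    }
}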
2/3) There is a public configuration API between JavaConfig and the state machine. One user (outside of JavaConfig) of this config model is UML-based modeling, which uses Eclipse's UML XML file to load the config. UML is your best bet, as we don't have other built-in configuration hooks at this moment. Contributions welcome ;)
You can configure the state machine dynamically using a Builder. The Builder uses the same configuration interfaces behind the scenes that the @Configuration model uses, via adapter classes.
Example:
StateMachine<String, String> buildMachine1() throws Exception {
    Builder<String, String> builder = StateMachineBuilder.builder();
    builder.configureStates()
        .withStates()
            .initial("S1")
            .end("SF")
            .states(new HashSet<String>(Arrays.asList("S1", "S2", "S3", "S4")));
    return builder.build();
}
Link to official docs: Dynamic Spring State Machine