I am developing an application in Java EE, and I would like to implement a cache using a @Singleton EJB. The cache holds referential data, so I only need to retrieve it once from the DB and then store it in memory.
I would like to know, from an implementation point of view, whether using a @Singleton EJB is correct, or could you recommend another approach? And is this correct from an OOP perspective?
Also, since the @Singleton EJB is read-only, are there any concurrency issues I could encounter?
The approach is OK, but the drawback is that it is not simple to enhance the solution later; there is no nice interface.
On the other hand, it works on every Java EE server without any migration effort, as it is standard Java EE.
Another solution depends a bit on the server you use.
WildFly (community): you might use the internal Infinispan subsystem and use it in a HashMap manner. You can simply use it locally to start with and change the configuration to clustered (replicated or distributed) if the cache grows and you need more memory for it.
JBoss EAP (enterprise product): here you can't use the Infinispan subsystem; technically it is possible, but it is not supported. You need the additional JBoss Data Grid (JDG), which is based on Infinispan.
Here you have more options: as above, use the cache in the same JVM, local or distributed/replicated; or run it on a different instance with remote access to the cache, which is often fast enough even though you have one remote hop. The cache JVM is then completely separated from the server and can be started and maintained independently, and the server and cache do not affect each other's memory.
For other vendors you can use the JDG approach (or Infinispan as open source) as well.
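To make the WildFly option concrete, here is a minimal sketch; the JNDI lookup name and the cache container name ("refdata") are assumptions that depend on how the Infinispan subsystem is configured:
import javax.annotation.Resource;
import javax.ejb.Singleton;
import org.infinispan.Cache;
import org.infinispan.manager.EmbeddedCacheManager;

@Singleton
public class ReferenceDataCache {

    // Assumed lookup name: it must match a cache container declared in the
    // WildFly Infinispan subsystem configuration.
    @Resource(lookup = "java:jboss/infinispan/container/refdata")
    private EmbeddedCacheManager cacheManager;

    public String get(String key) {
        // getCache() returns the container's default cache; use it like a Map
        Cache<String, String> cache = cacheManager.getCache();
        return cache.get(key);
    }
}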
As a quick and easy solution, a Singleton EJB can help, especially for catalogs whose values do not change.
Just make sure your EJB Singleton establishes the following:
Concurrency management by the container
All read methods annotated with LockType.READ, so they can be accessed concurrently by any arbitrary number of clients
For example:
import java.util.ArrayList;
import java.util.List;
import javax.annotation.PostConstruct;
import javax.ejb.ConcurrencyManagement;
import javax.ejb.ConcurrencyManagementType;
import javax.ejb.Lock;
import javax.ejb.LockType;
import javax.ejb.Singleton;
import javax.ejb.Startup;

@Singleton
@Startup
@ConcurrencyManagement(ConcurrencyManagementType.CONTAINER)
public class InitializationBean {

    private List<String> catalog01;
    private List<String> catalog02;

    @PostConstruct
    public void initialize() {
        // load the reference data from the DB once, at startup
        catalog01 = new ArrayList<>();
        catalog02 = new ArrayList<>();
    }

    // LockType.READ lets any number of clients call concurrently
    @Lock(LockType.READ)
    public List<String> getCatalog01() {
        return catalog01;
    }

    @Lock(LockType.READ)
    public List<String> getCatalog02() {
        return catalog02;
    }
}
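A caller can then inject the singleton; a minimal sketch assuming the InitializationBean above (the consumer bean is illustrative):
import java.util.List;
import javax.ejb.EJB;
import javax.ejb.Stateless;

@Stateless
public class CatalogConsumerBean {

    @EJB
    private InitializationBean cache;

    public List<String> firstCatalog() {
        // served from memory; no DB round trip after startup
        return cache.getCatalog01();
    }
}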
Context: I need to provide a way to change parameter values during production at the lowest possible performance cost.
Goal: I want to change annotation values on the fly and apply them at once to all microservice instances.
Personal background and limitations: I know I can use Spring Cloud Config to change parameters on the fly, as explained in this article, and I know there are challenges and pitfalls involved in changing annotations on the fly, as discussed in this Stack Overflow question.
I know that Spring Cloud Config can be used for setting up a centralized configuration applied to all microservice instances during boot/start; I have used it a bit. I am wondering if I can use it for centralizing parameters that affect customized annotations on the fly.
An imagined solution is:
// ... wherever I need somePropertyValue
@Value("${config.somePropertyValue}")
private String somePropertyValue;

@Bean
public String somePropertyValue() {
    return somePropertyValue;
}
A config client in every microservice exposes an endpoint that must be called not only when the application starts but whenever somePropertyValue, managed in the Spring Cloud Config Server's bootstrap.properties, is updated:
import org.springframework.beans.factory.annotation.Value;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.context.config.annotation.RefreshScope;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

@SpringBootApplication
public class SpringConfigClientApplication {

    public static void main(String[] args) {
        SpringApplication.run(SpringConfigClientApplication.class, args);
    }
}

@RefreshScope
@RestController
class MessageRestController {

    @Value("${server.somePropertyValue:Unable to connect to config server}")
    private String somePropertyValue;

    @RequestMapping("/server/somePropertyValue")
    String getSomePropertyValue() {
        return this.somePropertyValue;
    }
}
And somehow somePropertyValue is maintained in Spring Cloud Config, and if it changes during production it takes effect, on demand, everywhere somePropertyValue is annotated in all microservice instances.
I am currently achieving this behaviour by adding a Kafka consumer to all Spring Boot microservices that listens to a topic and, when it receives a new message, changes the parameter value on the fly. It seems odd that I created a Kafka dependency in all company microservices. Since I have used Spring Cloud Config for a somewhat similar scenario, I am wondering if there is a better alternative using some out-of-the-box Spring approach. Performance is highly important in my case, but a small delay in synchronizing all parameters isn't an issue; by delay I mean that two or three seconds to update parameters in all microservices is acceptable.
There are two ways to do that:
i- There's a refresh endpoint, and you can actually call that for a service, and it'll refresh its configuration without restarting itself, which is pretty neat. E.g. if MS-A is listening on 8080, do a POST request to this endpoint:
localhost:8080/refresh
NOTE: Spring Actuator actually adds a RefreshEndpoint to the app automatically when we annotate a controller in MS-A with @RefreshScope.
ii- What you can also do is use Spring Cloud Bus: broadcast an event, and then every service listens for it and refreshes itself. That's handy if you have dozens of services all using the Config Server and you don't want to go one by one and hit a /refresh endpoint as we did in the first approach. You just broadcast a message to a bus and have all these services automatically pick it up; a configuration sketch follows below.
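If you go with the bus approach, here is a minimal client-side configuration sketch; it assumes Spring Boot 2.x with spring-cloud-starter-bus-amqp on the classpath and a local RabbitMQ broker, so the property names and endpoint path below reflect those assumptions:
# application.properties (sketch, assuming Spring Cloud Bus over AMQP)
management.endpoints.web.exposure.include=refresh,bus-refresh
spring.rabbitmq.host=localhost
spring.rabbitmq.port=5672
With that in place, a single POST to /actuator/bus-refresh on any one instance propagates the refresh event to every instance listening on the bus.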
Reference: I learnt both concepts while taking a course on Pluralsight.
I'm using Spring Boot 2.2.4 with embedded Undertow.
I've enabled the access log using server.undertow.accesslog.enabled=true and everything works as expected.
I'm utilizing the actuator endpoints on a different port, which sets up a child context. I do not want requests to the actuator to be logged; currently they automatically go to management_access.log, where access. is the prefix of my main access log.
Any ideas on how to disable that access log? I know Spring is creating a separate WebServer via a factory for the actuator context, but I haven't found a way to customize that factory.
I found my own answer (and spent way too much time doing it).
It's a little bit of a hack, but it works:
New configuration class: foo.ManagementConfig
package foo;

import org.springframework.beans.factory.annotation.Value;
import org.springframework.boot.actuate.autoconfigure.web.ManagementContextConfiguration;
import org.springframework.boot.web.embedded.undertow.UndertowServletWebServerFactory;
import org.springframework.boot.web.server.WebServerFactoryCustomizer;
import org.springframework.context.annotation.Bean;

@ManagementContextConfiguration
public class ManagementConfig {

    @Bean
    WebServerFactoryCustomizer<UndertowServletWebServerFactory> actuatorCustomizer(
            @Value("${management.server.port}") int managementPort) {
        // Only disable the access log for the factory whose port matches the
        // management port, i.e. the actuator's child context.
        return factory -> {
            if (managementPort == factory.getPort()) {
                factory.setAccessLogEnabled(false);
            }
        };
    }
}
I created resources/META-INF/spring.factories so that it gets picked up by the ManagementContext:
org.springframework.boot.actuate.autoconfigure.web.ManagementContextConfiguration=foo.ManagementConfig
The part that's a bit of a hack is the if statement. It would have been great if the customizer applied only to the management context, but for some reason it gets applied to both. With the if statement, it simply does nothing for the primary context.
This would have unintended consequences if management.server.port were undefined or if it were the same as the primary context's port.
My retail application has various contexts like receive, transfer, etc. The requests to these contexts are handled by RESTful microservices developed using Spring Boot. The persistence layer is Cassandra, shared by all services, as we couldn't do vertical scaling for the microservices at the DB level because the services are tightly coupled conceptually.
We want vertical scaling at the GemFire end by creating different Regions for different contexts.
For example, a BOX table in Cassandra will be updated by Region Box-Receive (receive context) and Region Box-Transfer (transfer context) via a CacheWriter.
Our problem is how to maintain data sync between these two Regions.
Please also suggest any other approach for separation at the GemFire end.
GemFire version:
<dependency>
    <groupId>com.gemstone.gemfire</groupId>
    <artifactId>gemfire</artifactId>
    <version>8.2.6</version>
</dependency>
One alternative approach, since you are using Spring Boot, would be to do the following:
First, annotate your @SpringBootApplication class with @EnableGemfireCacheTransactions...
Example:
@SpringBootApplication
@EnableGemfireCacheTransactions
@EnableGemfireRepositories
class YourSpringBootApplication {

    public static void main(String[] args) {
        SpringApplication.run(YourSpringBootApplication.class, args);
    }
    ...
}
The @EnableGemfireCacheTransactions annotation enables Spring Data GemFire's GemfireTransactionManager, which integrates GemFire's CacheTransactionManager with Spring's transaction management infrastructure, which then allows you to do this...
Now, just annotate your @Service application component's transactional service methods with core Spring's @Transactional annotation, like so...
@Service
class YourBoxReceiverTransferService {

    @Transactional
    public <return-type> update(ReceiveContext receiveContext,
            TransferContext transferContext) {
        ...
        receiveContextRepository.save(receiveContext);
        transferContextRepository.save(transferContext);
        ...
    }
}
As you can see here, I also used Spring Data (GemFire's) Repository infrastructure to manage the persistence operations (e.g. CRUD), which will be used appropriately in the transactional scoped-context setup by Spring.
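For illustration, a minimal sketch of what one of those repositories might look like; ReceiveContext, its key type, and the Region name are assumptions based on the question, and the @Region annotation's package differs between Spring Data GemFire versions:
import org.springframework.data.gemfire.mapping.Region;
import org.springframework.data.repository.CrudRepository;

// Hypothetical domain type mapped to the Box-Receive Region.
// Note: in Spring Data GemFire 2.x, @Region lives in
// org.springframework.data.gemfire.mapping.annotation instead.
@Region("Box-Receive")
class ReceiveContext {
    // id and fields omitted for brevity
}

interface ReceiveContextRepository extends CrudRepository<ReceiveContext, String> {
}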
There are two advantages of the Spring approach over using GemFire's public API directly, which unnecessarily couples you to GemFire (a definite code smell, particularly in a Spring context):
You don't have to place a bunch of boilerplate code into your application components, where it does not belong!
Using Spring's transaction management infrastructure, it is extremely easy to change your transaction management strategy, for example switching from GemFire's local-only cache transactions to global, JTA-based transactions if the need ever arises (say, you now need to send a message over a JMS queue, after the GemFire Regions and the Cassandra BOX table are updated, to notify some downstream process that the receive/transfer context has changed). With Spring's transaction management infrastructure, you do not need to change a single line of application code to switch transaction management strategies (local to global, global to local, etc.).
Hope this helps!
-John
You can use transactions. Something like this should work:
// the transaction makes the two puts atomic
CacheTransactionManager txMgr = cache.getCacheTransactionManager();
txMgr.begin();
boxReceive.put(key, receiveValue);
...
boxTransfer.put(key, transferValue);
txMgr.commit();
This will work provided you co-locate the box-receive and box-transfer Regions and use the same key, or use a PartitionResolver to co-locate the data, as in the sketch below.
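A minimal sketch of such a PartitionResolver, assuming the box id is the key in both Regions (the class name and type parameters are illustrative):
import com.gemstone.gemfire.cache.EntryOperation;
import com.gemstone.gemfire.cache.PartitionResolver;

public class BoxPartitionResolver implements PartitionResolver<String, Object> {

    @Override
    public Object getRoutingObject(EntryOperation<String, Object> op) {
        // Entries that return the same routing object land on the same member,
        // so Box-Receive and Box-Transfer entries for one box are co-located.
        return op.getKey();
    }

    @Override
    public String getName() {
        return "BoxPartitionResolver";
    }

    @Override
    public void close() {
        // no resources to release
    }
}
Configure the resolver on both partitioned Regions so that a transaction touching the same box id never spans members.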
Imagine you are creating a Spring library which provides a service component for some remote service. The service component wants to cache response data internally, for which Spring caching is a very good fit. Also imagine the cache needs to be slightly more advanced than any of the default ones (timeouts, maximum sizes, etc.), so the library provides a cache manager to create it. However, you don't want the third-party cache manager to suddenly be responsible for all caching used in the project where the library is included (the project might have its own caches).
The behaviour I am observing is that if the project has caching configured using a simple application.properties (with, let's say, ehcache; see the example below), the cache manager provided by the component gets called to create all caches, no matter how I structure the code. Is this happening because the project hasn't provided any cache manager of its own?
Is it not possible to have caching like this in a library-provided service without the project being involved? It's very important for the use-case that the library can provide the cache without interfering with project caching.
Sample service cache configuration:
@Configuration
public class SpringAceClientCacheConfiguration {

    @Bean
    public CacheManager serviceCacheManager() {
        return new GuavaCacheManager("service-data") {
            @Override
            public Cache getCache(String name) {
                Logger.getLogger(getClass().getName()).info("Creating new cache for " + name + "...");
                return new GuavaCache(name, CacheBuilder.newBuilder().build());
            }
        };
    }
}
Sample project application.properties:
spring.cache.jcache.config=ehcache3.xml
Sample project ehcache3.xml:
<config xmlns='http://www.ehcache.org/v3'
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns:jsr107="http://www.ehcache.org/v3/jsr107"
xsi:schemaLocation="http://www.ehcache.org/v3 http://www.ehcache.org/schema/ehcache-core-3.0.xsd
http://www.ehcache.org/v3/jsr107 http://www.ehcache.org/schema/ehcache-107-ext-3.0.xsd">
<cache alias="content">
<heap unit="entries">4096</heap>
<jsr107:mbeans enable-statistics="true"/>
</cache>
</config>
I see the logging call for both the 'service-data' and the 'content' caches. The service cache configuration only cares about its own service-data cache. Should I not be able to provide a cache manager just for this cache, without the project having to declare a separate one with @Primary (which I believe might work, but I haven't tried yet)?
Thanks for any help!
There is no immediate notion of a "qualified" CacheManager, so when you use the annotation model you are expected to provide one CacheManager bean that manages your complete cache infrastructure.
However, the cache abstraction has a CacheResolver SPI. So your third-party lib could require one:
@Cacheable(cacheResolver = "requiredCacheResolver")
public Foo someMethod(String id)
Then the requiredCacheResolver bean can get the cache information from whatever CacheManager you like. The third-party lib could provide an implementation that takes a CacheManager or individual caches as a parameter.
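A minimal sketch of such an implementation; the class and cache names are illustrative, and it assumes the library's own CacheManager is passed in:
import java.util.Collection;
import java.util.Collections;
import org.springframework.cache.Cache;
import org.springframework.cache.CacheManager;
import org.springframework.cache.interceptor.CacheOperationInvocationContext;
import org.springframework.cache.interceptor.CacheResolver;

// Resolves the library's caches from its own CacheManager, leaving the
// application's primary CacheManager untouched.
public class LibraryCacheResolver implements CacheResolver {

    private final CacheManager libraryCacheManager;

    public LibraryCacheResolver(CacheManager libraryCacheManager) {
        this.libraryCacheManager = libraryCacheManager;
    }

    @Override
    public Collection<? extends Cache> resolveCaches(CacheOperationInvocationContext<?> context) {
        return Collections.singleton(libraryCacheManager.getCache("service-data"));
    }
}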
I am not sure I would recommend that, however. If you are using caching in a third-party library, you should define in your documentation the caches that you require (name and semantics, so that expiration can be configured accordingly). Then each user of your library should configure those caches in their infrastructure. In the end you'll have to do that anyway, and hiding it from the user does not seem like a good idea.
We are currently developing an application intended for deployment on a WebSphere server. The application should use an in-house Service Provider that provides access to services implemented as remote EJBs. The Service Provider bean has some hard-coded JNDI names to use.
During development we are using TomEE, and in general all is working nicely. All except one thing:
The ServiceProvider does a JNDI lookup of "cell/persistent/configService". I tried to create a mock EAR that contains mock EJBs for these services. I am able to deploy them, and I am able to access them from the application using JNDI names like "java:global/framework-mock-ear-1.0.0-SNAPSHOT/framework-mock-impl/ConfigServiceMock", but it seems to be impossible to access them using a JNDI lookup of "cell/persistent/configService"... so I added an openejb-jar.xml file to my mock implementation containing:
<openejb-jar>
    <ejb-deployment ejb-name="ConfigServiceMock">
        <jndi name="cell/persistent/configService"
              interface="de.thecompany.common.services.config.ConfigService"/>
    </ejb-deployment>
</openejb-jar>
And I can see during startup that the bean seems to be registered correctly under that name:
INFORMATION: Jndi(name=cell/persistent/configService) --> Ejb(deployment-id=ConfigServiceMock)
But I have no idea how to make the other EAR able to access this bean using that name.
The Service Provider part is given and we are not able to change it at all, so please don't suggest changing the hard-coded JNDI names. We surely would like to do so, but we are not able to change anything.
OK... so I wasted quite some time on this until I finally came up with a solution. Instead of configuring TomEE and OpenEJB to find my beans, I hijacked the InitialContext and rewrote my queries.
package de.mycompany.mock.tomee;

import org.apache.naming.java.javaURLContextFactory;

import javax.naming.Context;
import javax.naming.NamingException;
import java.util.Hashtable;

public class MycompanyNamingContextFactory extends javaURLContextFactory {

    private static Context initialContext;

    @Override
    public Context getInitialContext(Hashtable environment) throws NamingException {
        if (initialContext == null) {
            Hashtable childEnv = (Hashtable) environment.clone();
            childEnv.put("java.naming.factory.initial", "org.apache.naming.java.javaURLContextFactory");
            initialContext = new MycompanyInitialContext(childEnv);
        }
        return initialContext;
    }
}
By setting the system property
java.naming.factory.initial=de.mycompany.mock.tomee.MycompanyNamingContextFactory
I was able to inject my MycompanyInitialContext context implementation:
package de.mycompany.mock.tomee;

import org.apache.openejb.core.ivm.naming.IvmContext;

import javax.naming.NamingException;
import java.util.Hashtable;

public class MycompanyInitialContext extends IvmContext {

    public MycompanyInitialContext(Hashtable<String, Object> environment) throws NamingException {
        super(environment);
    }

    @Override
    public Object lookup(String compositName) throws NamingException {
        // rewrite the hard-coded WebSphere names to the mock beans' global JNDI names
        if ("cell/persistent/configService".equals(compositName)) {
            return super.lookup("java:global/mycompany-mock-ear-1.0.0-SNAPSHOT/mycompany-mock-impl/ConfigServiceMock");
        }
        if ("cell/persistent/authorizationService".equals(compositName)) {
            return super.lookup("java:global/mycompany-mock-ear-1.0.0-SNAPSHOT/mycompany-mock-impl/AuthServiceMock");
        }
        return super.lookup(compositName);
    }
}
I know this is not pretty, and if anyone has an idea how to make this easier and prettier, I'm all ears, but this solution seems to work. As it's only intended to simulate production services during development, this hack doesn't induce any nightmares for me. Just thought I'd post it in case someone else stumbles over something similar.
I know this answer is coming a few years after the question, but a simpler solution would be to simply set the system property as follows (say, in catalina.properties):
java.naming.factory.initial=org.apache.openejb.core.OpenEJBInitialContextFactory
This allows you to look up the EJB by the name you set, the one that shows in the TomEE logs during startup, e.g. your 'cell/persistent/configService' from:
INFORMATION: Jndi(name=cell/persistent/configService) --> Ejb(deployment-id=ConfigServiceMock)
With the system property set, you can look up the EJB the way you would want:
final Context ctx = new InitialContext();
ctx.lookup("cell/persistent/configService");
The OpenEJBInitialContextFactory allows access to local EJBs as well as container resources.
If you didn't want to set the system property (as it would affect all applications in TomEE), you could still use the factory by setting it the 'standard' way:
Properties properties = new Properties();
properties.setProperty(Context.INITIAL_CONTEXT_FACTORY, "org.apache.openejb.core.OpenEJBInitialContextFactory");
final Context ctx = new InitialContext(properties);
ctx.lookup("cell/persistent/configService");
And of course you could still look them up using the "java:global/" names as well with that factory.