Guice - select different providers while running - OSGi

I want to change a provider I'm using at runtime without having to stop the JVM. For example, this isn't exactly what I'm trying to do, but the idea is the same: say I want to switch from Amazon S3 to Google Cloud Storage in the middle of a running application.
Is that something I can do within Guice?
I would have to have all the jars available at runtime and configure all modules at startup. Then, once the application is running, I'd use a provider that can determine which instance to inject, both at startup and later when the configuration changes.
Or would it be better to just restart the application after updating the configuration, letting the system proceed with the new configuration and restarting again whenever it needs to change?
Would OSGi help here?

You don't need anything extra: Guice can do it out of the box. But you'll have to inject Providers instead of direct instances.
In your module:
bind(Cloud.class)
    .annotatedWith(Names.named("google"))
    .to(GoogleCloud.class);

bind(Cloud.class)
    .annotatedWith(Names.named("amazon"))
    .to(AmazonCloud.class);

bind(Cloud.class)
    .toProvider(SwitchingCloudProvider.class);
Somewhere:
class SwitchingCloudProvider implements Provider<Cloud> {
    @Inject @Named("google") Provider<Cloud> googleCloudProvider;
    @Inject @Named("amazon") Provider<Cloud> amazonCloudProvider;
    @Inject Configuration configuration; // used as your switch "commander"

    public Cloud get() {
        switch (configuration.getCloudName()) {
            case "google": return googleCloudProvider.get();
            case "amazon": return amazonCloudProvider.get();
            default:
                // Whatever you want, usually an exception.
                throw new IllegalStateException("Unknown cloud: " + configuration.getCloudName());
        }
    }
}
Or in a provider method in your module:
@Provides
Cloud provideCloud(
        @Named("google") Provider<Cloud> googleCloudProvider,
        @Named("amazon") Provider<Cloud> amazonCloudProvider,
        Configuration configuration) {
    switch (configuration.getCloudName()) {
        case "google": return googleCloudProvider.get();
        case "amazon": return amazonCloudProvider.get();
        default:
            // Whatever you want, usually an exception.
            throw new IllegalStateException("Unknown cloud: " + configuration.getCloudName());
    }
}
Usage:
class Foo {
    @Inject Provider<Cloud> cloudProvider; // Do NOT inject Cloud directly or you won't see the changes as they come up.

    public void bar() {
        Cloud cloud = cloudProvider.get();
        // use cloud
    }
}
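For completeness, here's a minimal sketch of how the pieces could fit together. The Cloud, GoogleCloud, AmazonCloud and Configuration types are placeholders from the question, and Configuration.setCloudName is a hypothetical mutator:

import com.google.inject.AbstractModule;
import com.google.inject.Guice;
import com.google.inject.Injector;
import com.google.inject.name.Names;

public class CloudSwitchDemo {
    public static void main(String[] args) {
        // The mutable "switch commander" (hypothetical implementation).
        Configuration config = new Configuration("google");

        Injector injector = Guice.createInjector(new AbstractModule() {
            @Override
            protected void configure() {
                bind(Configuration.class).toInstance(config);
                bind(Cloud.class).annotatedWith(Names.named("google")).to(GoogleCloud.class);
                bind(Cloud.class).annotatedWith(Names.named("amazon")).to(AmazonCloud.class);
                bind(Cloud.class).toProvider(SwitchingCloudProvider.class);
            }
        });

        Foo foo = injector.getInstance(Foo.class);
        foo.bar();                      // resolves to GoogleCloud
        config.setCloudName("amazon");  // flip the switch at runtime, no restart
        foo.bar();                      // now resolves to AmazonCloud
    }
}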

Related

Integration testing of a Spring Cloud application with the AWS Parameter Store

How to perform integration testing of a Spring Boot application reading properties from the AWS Parameter Store (dependency org.springframework.cloud:spring-cloud-starter-aws-parameter-store-config).
Should the AWS Parameter Store integration be disabled in integration tests?
How to use local server (or mock) instead of the real AWS Parameter Store in integration tests?
Usually integration with the AWS Parameter Store should be disabled in integration tests for simplicity and performance. Instead, load test properties from a file (e.g. src/test/resources/test.properties):
@SpringBootTest(properties = "aws.paramstore.enabled=false")
@TestPropertySource("classpath:/test.properties")
public class SampleTests {
    //...
}
If individual tests need to check integration with the AWS Parameter Store, use Testcontainers and LocalStack, an easy-to-use local AWS cloud stack for Docker.
Add a configuration class creating a custom ssmClient bean of type AWSSimpleSystemsManagement configured to use LocalStack, instead of the default one declared in org.springframework.cloud.aws.autoconfigure.paramstore.AwsParamStoreBootstrapConfiguration, which uses the real AWS Parameter Store.
@Configuration(proxyBeanMethods = false)
public class AwsParamStoreBootstrapConfiguration {

    public static final LocalStackContainer AWS_SSM_CONTAINER = initContainer();

    public static LocalStackContainer initContainer() {
        LocalStackContainer container = new LocalStackContainer().withServices(SSM);
        container.start();
        Runtime.getRuntime().addShutdownHook(new Thread(container::stop));
        return container;
    }

    @Bean
    public AWSSimpleSystemsManagement ssmClient() {
        return AWSSimpleSystemsManagementClientBuilder.standard()
                .withEndpointConfiguration(AWS_SSM_CONTAINER.getEndpointConfiguration(SSM))
                .withCredentials(AWS_SSM_CONTAINER.getDefaultCredentialsProvider())
                .build();
    }
}
Since AwsParamStorePropertySourceLocator is loaded by the Spring Cloud "bootstrap" context, you need to add the configuration class to the bootstrap context by adding the following entry to the file src/test/resources/META-INF/spring.factories:
org.springframework.cloud.bootstrap.BootstrapConfiguration=\
com.example.test.AwsParamStoreBootstrapConfiguration
The same approach can be used to mock ssmClient with Mockito.
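A minimal sketch of that Mockito variant, registered in spring.factories the same way as above; stubbing getParametersByPath (the call the property source locator issues at startup) to return an empty result is an assumption that amounts to "load nothing":

import java.util.Collections;

import com.amazonaws.services.simplesystemsmanagement.AWSSimpleSystemsManagement;
import com.amazonaws.services.simplesystemsmanagement.model.GetParametersByPathRequest;
import com.amazonaws.services.simplesystemsmanagement.model.GetParametersByPathResult;
import org.mockito.Mockito;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration(proxyBeanMethods = false)
public class MockAwsParamStoreBootstrapConfiguration {

    @Bean
    public AWSSimpleSystemsManagement ssmClient() {
        AWSSimpleSystemsManagement ssm = Mockito.mock(AWSSimpleSystemsManagement.class);
        // Return an empty parameter list so the property source loads nothing instead of failing.
        Mockito.when(ssm.getParametersByPath(Mockito.any(GetParametersByPathRequest.class)))
                .thenReturn(new GetParametersByPathResult().withParameters(Collections.emptyList()));
        return ssm;
    }
}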

Spring Boot test: wait for a microservice's scheduled task

I'm trying to test a service that communicates with another one.
One of them generates audits which are stored in memory until a scheduled task flushes them to a Redis node:
@Component
public class AuditFlushTask {

    private AuditService auditService;

    private AuditFlushTask(AuditService auditService) {
        this.auditService = auditService;
    }

    @Scheduled(fixedDelayString = "${fo.audit-flush-interval}")
    public void flushAudits() {
        this.auditService.flush();
    }
}
On the other hand, this service provides an endpoint that returns those flushed audits:
public Collection<String> listAudits() {
    return this.boService.listRawAudits(deadlineTimestamp);
}
The problem is that I'm building an integration test to check whether this process works correctly, i.e. whether the audits are served properly.
So I don't know how to "wait until the audits have been flushed" on the microservice.
Any ideas?
Don't test the framework: Spring almost certainly has tests covering fixed delays.
Instead, keep all the logic within the service itself, and integration test that in isolation from the Spring @Scheduled machinery.
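As a sketch of that approach, assuming the audit service exposes both flush() and a way to read back flushed audits (the record call and every method name besides flush() are illustrative):

import static org.junit.jupiter.api.Assertions.assertFalse;

import org.junit.jupiter.api.Test;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.context.SpringBootTest;

@SpringBootTest
class AuditFlushIntegrationTest {

    @Autowired
    private AuditService auditService;

    @Test
    void flushMakesAuditsAvailable() {
        auditService.record("some-audit");  // hypothetical: store an audit in memory
        auditService.flush();               // invoke the scheduled task's logic directly
        assertFalse(auditService.listRawAudits(System.currentTimeMillis()).isEmpty());
    }
}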

How to prevent a property being set on a certain profile

We want to implement a feature that enables a user to choose a role in the system by sending the desired role in the login request.
This feature is meant for testing (creating test users or assigning roles to existing ones is "impossible" in the customer's system) and, of course, should never be deployed to a production environment.
I want the deployment of my application to fail if the property feature.choose-roles is set to true AND the Spring active profile is set to production.
As we are using Spring's config-server features, I also want the application to completely stop working if the property is set to true at runtime.
My first attempt was to simply create this Config:
@Configuration
public class FeatureToggleGuardConfig {

    @Bean
    @RefreshScope
    @ConditionalOnProperty(value = "feature.choose-roles", havingValue = "true")
    @Profile("production")
    public Object preventDeploymentOfRoleChoosingFeatureOnProduction() {
        throw new RuntimeException("feature.choose-roles must not be true in production profile!");
    }
}
This works if the property is set to true at deployment, but as I understand it, the bean will only be refreshed once someone actually tries to use it - which will never happen.
Also, I don't think it would stop the whole application if it just threw a runtime exception when used.
In short:
I want to prevent my application from running (or continuing to run) if, at any time, the property feature.choose-roles is true and the active profile is "production".
I do not want to alter production code in order to do this (if (feature is enabled && profile is production) etc.).
Perhaps instead of having your profile drive some sort of blocker, you can have your profile drive a config bean that says whether or not to use the feature. Then have the non-prod config read from your property, and have the prod config always return false.
Something like:
public interface ChooseRolesConfig {
    boolean allowChoosingRoles();
}

@Component
@Profile("!production")
public class NonProdChooseRolesConfig implements ChooseRolesConfig {

    @Value("${feature.choose-roles}")
    boolean chooseRoles;

    @Override
    public boolean allowChoosingRoles() {
        return chooseRoles;
    }
}

@Component
@Profile("production")
public class ProdChooseRolesConfig implements ChooseRolesConfig {

    @Override
    public boolean allowChoosingRoles() {
        return false;
    }
}
and then just autowire a ChooseRolesConfig object and call the method; regardless of what you change feature.choose-roles to via cloud config, it should always be false for prod (see the usage sketch below).
Disclaimer: I blindly wrote this so it might be missing some annotations or something but hopefully you get the idea
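A minimal usage sketch; LoginService, User and setRole are illustrative names, not from the question:

import org.springframework.stereotype.Service;

@Service
public class LoginService {

    private final ChooseRolesConfig chooseRolesConfig;

    public LoginService(ChooseRolesConfig chooseRolesConfig) {
        this.chooseRolesConfig = chooseRolesConfig;
    }

    public void applyRequestedRole(User user, String requestedRole) {
        // No profile checks in production code: on the production profile,
        // allowChoosingRoles() is hard-wired to false, so the request is ignored.
        if (requestedRole != null && chooseRolesConfig.allowChoosingRoles()) {
            user.setRole(requestedRole);
        }
    }
}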

Spring Data Redis - @Transactional support on Repository

We're using spring-boot-starter-parent 1.4.1 together with spring-boot-starter-redis and spring-boot-starter-data-redis. We use Redis for (a) message passing to an external app and (b) storing some information in a repository. Our Redis config looks like this:
@Configuration
@EnableRedisRepositories
open class RedisConfig {

    @Bean // for message passing
    @Profile("test")
    open fun testRedisChannelProvider(): RedisParserChannelProvider {
        return RedisParserChannelProvider("test_parser:parse.job", "test_parser:parse.joblist")
    }

    @Bean // for message passing
    @Profile("!test")
    open fun productionRedisChannelProvider(): RedisParserChannelProvider {
        return RedisParserChannelProvider("parser:parse.job", "parser:parse.joblist")
    }

    @Bean // for message passing
    open fun parseJobTemplate(connectionFactory: RedisConnectionFactory): RedisTemplate<String, ParseJob> {
        val template = RedisTemplate<String, ParseJob>()
        template.connectionFactory = connectionFactory
        template.valueSerializer = Jackson2JsonRedisSerializer<ParseJob>(ParseJob::class.java)
        return template
    }

    //@Bean // for message passing
    //open fun parseJobListTemplate ...

    // no template for repository
}
With this config the message passing is working nicely, as well as writing to/reading from the repository. Now I am trying to get @Transactional working for communication with the repository, but I have not succeeded so far. I already followed the example config in the docs and manually enabled transaction support on it:
@Bean
open fun redisTemplate(): RedisTemplate<*, *> {
    val template = RedisTemplate<ByteArray, ByteArray>()
    template.setEnableTransactionSupport(true)
    return template
}
...but this is apparently not the way to go. Currently, everything written to the repository (in particular during tests) stays there.
@Transactional use of Redis repositories is not possible, and I doubt it will work at all.
The reason behind this is how Spring Data Redis repository support works:
RedisKeyValueAdapter relies on the results of write and read operations that are issued while persisting an object.
Redis transactions behave more like deferred batches, so it's not possible to wrap Redis repository support inside a transaction; that would require a different approach and impose several limitations.
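To see why, here is a sketch of the deferred-batch behavior using a plain StringRedisTemplate (Java; the key names are arbitrary): commands issued between MULTI and EXEC are only queued, and their results arrive as a batch when EXEC runs, which is exactly what RedisKeyValueAdapter cannot work with.

import java.util.List;

import org.springframework.dao.DataAccessException;
import org.springframework.data.redis.core.RedisOperations;
import org.springframework.data.redis.core.SessionCallback;
import org.springframework.data.redis.core.StringRedisTemplate;

public class RedisTxDemo {

    @SuppressWarnings("unchecked")
    static List<Object> demo(StringRedisTemplate template) {
        return template.execute(new SessionCallback<List<Object>>() {
            @Override
            public <K, V> List<Object> execute(RedisOperations<K, V> ops) throws DataAccessException {
                ops.multi();
                ops.opsForValue().set((K) "key", (V) "value");
                // Inside MULTI this read is only queued: it returns null here,
                // so code that needs the result immediately (like the adapter) breaks.
                V queuedRead = ops.opsForValue().get("key");
                return ops.exec(); // the queued commands' results only materialize now
            }
        });
    }
}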

Persistence in an OSGi environment using Datanucleus JDO and blueprints

I am trying to do persistence in an OSGi environment (Karaf running Felix) with as much modularity as possible. I chose JDO over JPA for its added features (mainly fetch groups), with DataNucleus as the implementation. I use Maven to build the whole project.
As I didn't have any prior experience with JDO or OSGi, it was quite a challenge to make either of them work. I am presently able to do JDO persistence in a Java SE environment (unit tests work without a problem) and I know how to provide services in an OSGi environment using the blueprint container. But I am not able to make those two things work together; I am having classloading issues.
I was not able to build even a simple application that does JDO persistence on Karaf (I tried following this tutorial, but it uses Spring DM and I was unable to rewrite it to use the OSGi blueprint instead).
What I am most confused about is:
What value should I set the datanucleus.primaryClassLoader property to?
What class loader to pass as an argument to the JDOHelper.getPersistenceManagerFactory method?
What packages to explicitly import using the maven-bundle-plugin? (looks like at least javax.jdo, org.datanucleus.api.jdo and org.osgi.framework might be required)
What do the other bundles need besides a reference to PersistenceManagerFactory?
Additionally:
Is it possible to separate the persistence info from the value classes? If I understand it correctly, that would only be possible with runtime enhancement, which would be very complicated if doable at all.
Is it possible to define interdependent persistence capable classes in multiple bundles? Such as having Users defined in one bundle and their Addresses in another?
I would be extremely grateful for an example of a simple multi-bundle project that takes care of persistence using only Datanucleus, JDO API and OSGi blueprint.
Thank you
I can only provide some basic hints about getting JDO/DataNucleus to work on top of Karaf.
As pointed out in the tutorial, you'll need to extend LocalPersistenceManagerFactoryBean, implementing the BundleContextAware interface as well.
The key point here is classloading: LocalPersistenceManagerFactoryBean expects all classes to be loaded by one single classloader, which isn't the case in an OSGi runtime.
In order to get it working you'll need to:
Explicitly import the org.datanucleus.api.jdo package in your manifest file.
Set the datanucleus.primaryClassLoader property to the same classloader you'll pass to the JDOHelper.getPersistenceManagerFactory method; this classloader is the one used by the org.datanucleus.api.jdo bundle (see the example below).
Set the datanucleus.plugin.pluginRegistryClassName property to org.datanucleus.plugin.OSGiPluginRegistry.
When stopping/uninstalling your bundle, refresh the javax.jdo bundle to avoid errors when re-creating the persistence manager factory (check this question on the subject); a sketch of this refresh step follows the sample below.
Sample custom LocalPersistenceManagerFactoryBean:
public class OSGiLocalPersistenceManagerFactoryBean
        extends LocalPersistenceManagerFactoryBean implements BundleContextAware {

    public static final String JDO_BUNDLE_NAME = "org.datanucleus.api.jdo";
    public static final String JDO_PMF_CLASS_NAME = "org.datanucleus.api.jdo.JDOPersistenceManagerFactory";

    private BundleContext bundleContext;

    @Override
    protected PersistenceManagerFactory newPersistenceManagerFactory(String name) {
        return JDOHelper.getPersistenceManagerFactory(name, getClassLoader());
    }

    @Override
    protected PersistenceManagerFactory newPersistenceManagerFactory(Map props) {
        ClassLoader classLoader = getClassLoader();
        props.put("datanucleus.primaryClassLoader", classLoader);
        if (FrameworkUtil.getBundle(this.getClass()) != null) { // running in OSGi
            props.put("datanucleus.plugin.pluginRegistryClassName", "org.datanucleus.plugin.OSGiPluginRegistry");
        }
        return JDOHelper.getPersistenceManagerFactory(props, classLoader);
    }

    private ClassLoader getClassLoader() {
        ClassLoader classLoader = null;
        Bundle thisBundle = FrameworkUtil.getBundle(this.getClass());
        if (thisBundle != null) { // on OSGi runtime
            for (Bundle bundle : bundleContext.getBundles()) {
                if (JDO_BUNDLE_NAME.equals(bundle.getSymbolicName())) {
                    try {
                        classLoader = bundle.loadClass(JDO_PMF_CLASS_NAME).getClassLoader();
                    } catch (ClassNotFoundException e) {
                        // do something fancy here ...
                    }
                    break;
                }
            }
        } else { // on Java runtime
            classLoader = this.getClass().getClassLoader();
        }
        return classLoader;
    }

    @Override
    public void setBundleContext(BundleContext bundleContext) {
        this.bundleContext = bundleContext;
    }
}
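And a sketch of the refresh step mentioned in the hints above, using the standard FrameworkWiring API (OSGi 4.3+); the surrounding class and method names are illustrative:

import java.util.Collections;

import org.osgi.framework.Bundle;
import org.osgi.framework.BundleContext;
import org.osgi.framework.wiring.FrameworkWiring;

public class JdoBundleRefresher {

    /** Refresh the javax.jdo bundle so a fresh PersistenceManagerFactory can be created later. */
    public static void refreshJdoBundle(BundleContext context) {
        for (Bundle bundle : context.getBundles()) {
            if ("javax.jdo".equals(bundle.getSymbolicName())) {
                // The system bundle (id 0) exposes the framework-wide wiring operations.
                FrameworkWiring wiring = context.getBundle(0L).adapt(FrameworkWiring.class);
                wiring.refreshBundles(Collections.singleton(bundle));
                break;
            }
        }
    }
}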
