The registration of a ManagedServiceFactory can be done as follows:
private ServiceRegistration factoryService;

public void start(BundleContext context) {
    Dictionary props = new Hashtable();
    props.put("service.pid", "test.smssenderfactory");
    factoryService = context.registerService(ManagedServiceFactory.class.getName(),
            new SmsSenderFactory(), props);
}
How can one do this in Blueprint? (A sample example would be highly useful.)
Apache Aries has support for managed services. You can find examples in the integration tests of Apache Aries Blueprint at https://svn.apache.org/repos/asf/aries/trunk/blueprint/blueprint-itests/src/test/resources/ManagedServiceFactoryTest.xml. In this case you can change the code of your SMSSenderFactory so that it does not implement ManagedServiceFactory directly, as Blueprint does that for you.
You can also embed your code (that you gave as a sample in your question) into the init-method of your bean, if that was your question. In that case the ManagedServiceFactory should be unregistered in the destroy-method of the bean.
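For that second option, a minimal sketch could look like the following (the class name and the init/destroy method names are illustrative; they would be wired up in your blueprint XML via init-method, destroy-method, and an injected blueprintBundleContext):

import java.util.Dictionary;
import java.util.Hashtable;

import org.osgi.framework.BundleContext;
import org.osgi.framework.Constants;
import org.osgi.framework.ServiceRegistration;
import org.osgi.service.cm.ManagedServiceFactory;

public class SmsSenderFactoryRegistrar {

    private BundleContext bundleContext;        // injected from the blueprint XML
    private ServiceRegistration registration;

    public void setBundleContext(BundleContext bundleContext) {
        this.bundleContext = bundleContext;
    }

    // referenced as init-method in the blueprint XML
    public void init() {
        Dictionary<String, Object> props = new Hashtable<String, Object>();
        props.put(Constants.SERVICE_PID, "test.smssenderfactory");
        registration = bundleContext.registerService(ManagedServiceFactory.class.getName(),
                new SmsSenderFactory(), props);
    }

    // referenced as destroy-method in the blueprint XML
    public void destroy() {
        registration.unregister();
    }
}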
The third option is to leave Blueprint out of the game. You already have a ManagedServiceFactory implementation, so why not simply register it in the Activator of the bundle?
The fourth option is to use Declarative Services with one of the annotation sets. In that case Metatype information will also be generated, and you can configure your SMSSender via the web console.
I suggest using the last option.
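For the fourth option, here is a minimal Declarative Services sketch (the class name and the gateway property are purely illustrative, not from the question). With a DS 1.3+ runtime, each factory configuration created for the PID test.smssenderfactory results in one configured instance:

import java.util.Map;

import org.osgi.service.component.annotations.Activate;
import org.osgi.service.component.annotations.Component;
import org.osgi.service.component.annotations.ConfigurationPolicy;

@Component(
        configurationPid = "test.smssenderfactory",
        configurationPolicy = ConfigurationPolicy.REQUIRE)
public class SmsSender {

    @Activate
    void activate(Map<String, Object> config) {
        // read the configuration, e.g. a hypothetical 'gateway' property
        String gateway = (String) config.get("gateway");
        // ... initialise the sender with it ...
    }
}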
Spring Integration's FtpInboundFileSynchronizer allows setting a Comparator<FTPFile> to order the downloads. The documentation says:
Starting with version 5.1, the synchronizer can be provided with a Comparator. This is useful when restricting the number of files fetched with maxFetchSize.
This is fine for @Bean configuration:
@Bean
public FtpInboundFileSynchronizer ftpInboundFileSynchronizer(...) {
    FtpInboundFileSynchronizer synchronizer = new FtpInboundFileSynchronizer(sessionFactory);
    ...
    synchronizer.setComparator(comparator);
    return synchronizer;
}
But if I want to programmatically assemble flows, the Java DSL is encouraged:
StandardIntegrationFlow flow = IntegrationFlows
        .from(Ftp.inboundAdapter(ftpFileSessionFactory, comparator)
                .maxFetchSize(1)
                ...
The comparator in the Ftp.inboundAdapter(...) factory method is only for comparing files locally, after they have been downloaded. There are configuration settings that get passed to the synchronizer here (like remote directory, timestamp, etc.), but there is no setting for the synchronizer's comparator equivalent to the one set above.
Solution attempt:
The alternative is to create the synchronizer as a non-bean, create the FtpInboundFileSynchronizingMessageSource in a similar way, and use IntegrationFlows.from(source) to assemble the flow. But this results in a runtime exception when the flow is registered with the flow context:
Creating EvaluationContext with no beanFactory
java.lang.RuntimeException: No beanFactory
at org.springframework.integration.expression.ExpressionUtils.createStandardEvaluationContext(ExpressionUtils.java:90) ~[spring-integration-core-5.3.2.RELEASE.jar:5.3.2.RELEASE]
at org.springframework.integration.file.remote.synchronizer.AbstractInboundFileSynchronizer.afterPropertiesSet(AbstractInboundFileSynchronizer.java:299) ~[spring-integration-file-5.3.2.RELEASE.jar:5.3.2.RELEASE]
That makes sense; the FtpInboundFileSynchronizer is not supposed to be constructed outside of a context. (Though this does appear to work.) But how, in that case, can I dynamically assemble FTP integration flows with a synchronizer configured with a Comparator<FTPFile>?
It looks like we have missed exposing that remoteComparator option in the DSL.
Feel free to raise a GH issue or even contribute a fix: https://github.com/spring-projects/spring-integration/issues
As a workaround for dynamic flows, I really would suggest going with a separate FtpInboundFileSynchronizer and FtpInboundFileSynchronizingMessageSource and then using the mentioned IntegrationFlows.from(source). What you are probably missing in your configuration is this API:
/**
 * Add an object which will be registered as an {@link IntegrationFlow} dependant bean in the
 * application context. Usually it is some support component, which needs an application context.
 * For example dynamically created connection factories or header mappers for AMQP, JMS, TCP etc.
 * @param bean an additional arbitrary bean to register into the application context.
 * @return the current builder instance
 */
IntegrationFlowRegistrationBuilder addBean(Object bean);
I mean that the FtpInboundFileSynchronizingMessageSource is OK to pass to from() as is, but the synchronizer has to be added as an extra bean for registration.
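A minimal sketch of that workaround (the directory names, the poller, and the handler below are placeholders, not from the original question):

import java.io.File;
import java.util.Comparator;

import org.apache.commons.net.ftp.FTPFile;
import org.springframework.integration.dsl.IntegrationFlow;
import org.springframework.integration.dsl.IntegrationFlows;
import org.springframework.integration.dsl.Pollers;
import org.springframework.integration.dsl.context.IntegrationFlowContext;
import org.springframework.integration.file.remote.session.SessionFactory;
import org.springframework.integration.ftp.inbound.FtpInboundFileSynchronizer;
import org.springframework.integration.ftp.inbound.FtpInboundFileSynchronizingMessageSource;

public class DynamicFtpFlow {

    public void register(IntegrationFlowContext flowContext,
                         SessionFactory<FTPFile> sessionFactory,
                         Comparator<FTPFile> remoteComparator) {

        FtpInboundFileSynchronizer synchronizer = new FtpInboundFileSynchronizer(sessionFactory);
        synchronizer.setRemoteDirectory("/remote");           // hypothetical remote directory
        synchronizer.setComparator(remoteComparator);         // the Comparator<FTPFile> in question

        FtpInboundFileSynchronizingMessageSource source =
                new FtpInboundFileSynchronizingMessageSource(synchronizer);
        source.setLocalDirectory(new File("local"));          // hypothetical local directory
        source.setMaxFetchSize(1);

        IntegrationFlow flow = IntegrationFlows
                .from(source, e -> e.poller(Pollers.fixedDelay(5000)))
                .handle(message -> System.out.println(message.getPayload()))
                .get();

        flowContext.registration(flow)
                .addBean(synchronizer)   // gives the synchronizer a BeanFactory, avoiding the error above
                .register();
    }
}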
Another, fancier way is to consider using a new feature called DSL extensions: https://docs.spring.io/spring-integration/docs/5.3.2.RELEASE/reference/html/dsl.html#java-dsl-extensions
With that, you can extend FtpInboundChannelAdapterSpec to provide the missing option for configuring the internal synchronizer.
I have developed a new connector. This connector needs to be configured with two parameters, let's say:
default_trip_timeout_milis
default_trip_threshold
The challenge is that I want to read values such as ${myValue_a} from an API, using an HTTP call, not from a file or inline values.
Since this is a connector, I need to make this API call somewhere before connectors are initialized.
FlowVars aren't an option, since they are initialized with the flows, and this needs to happen earlier in the Mule app lifecycle.
My idea is to create a Spring bean implementing Initialisable, so it is called before connectors are initialized, and there, using any Java-based library (Spring RestTemplate?), call the API, get the values, and store them somewhere (context? object store?) so the connector can access them.
Does that make sense? Any other ideas?
Thanks!
Hmm, you could make a class that creates the properties at startup and, in this class, obtain the API properties via an HTTP request. Example below:
public class PropertyInit implements InitializingBean, FactoryBean<Properties> {
    private final Properties props = new Properties();
    @Override
    public void afterPropertiesSet() throws Exception {
        // call the configuration API here (e.g. with Spring's RestTemplate)
        // and copy the returned values into props before the placeholders are resolved
    }
    @Override
    public Properties getObject() throws Exception {
        return props;
    }
    @Override
    public Class<?> getObjectType() {
        return Properties.class;
    }
    @Override
    public boolean isSingleton() {
        return true;
    }
}
Now you should be able to load this property class with:
<context:property-placeholder properties-ref="propertyInit"/>
Hope you like this idea. I used this approach in a previous project.
First, a strong warning about doing this. If you go down this path you risk breaking your application in very strange ways: if any other components depend on this component, then by having dynamically configured components at startup you will break them. You should think about whether there are other ways to achieve this behaviour instead of using properties.
That said, the way to do this would be to use a proxy pattern: a proxy for the component that is recreated whenever its properties change. So you will need to create a class which extends the circuit breaker and encapsulates an instance of it, recreating that instance whenever its properties change. These properties must not be used outside of the proxy class, as other components may read them at startup and then never refresh. Keep in mind that anything which might directly or indirectly access these properties cannot do so in its initialisation phase, or your application will break.
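A rough sketch of that proxy idea, with all type and method names hypothetical (this is not a Mule or Spring API, just the plain pattern):

import java.util.concurrent.Callable;

public class CircuitBreakerProxy {

    // Hypothetical stand-in for the real circuit-breaker component being wrapped.
    interface CircuitBreaker {
        <T> T call(Callable<T> action) throws Exception;
    }

    private volatile CircuitBreaker delegate;

    public CircuitBreakerProxy(long tripTimeoutMillis, int tripThreshold) {
        reconfigure(tripTimeoutMillis, tripThreshold);
    }

    // Called whenever new values arrive from the HTTP configuration endpoint.
    public synchronized void reconfigure(long tripTimeoutMillis, int tripThreshold) {
        this.delegate = createBreaker(tripTimeoutMillis, tripThreshold);
    }

    // All calls go through the current delegate, so a reconfigure takes effect immediately.
    public <T> T call(Callable<T> action) throws Exception {
        return delegate.call(action);
    }

    private CircuitBreaker createBreaker(long tripTimeoutMillis, int tripThreshold) {
        // build and return a new circuit breaker configured with the given values
        return new CircuitBreaker() {
            @Override
            public <T> T call(Callable<T> action) throws Exception {
                return action.call(); // placeholder: real trip logic would live here
            }
        };
    }
}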
It's worth taking a look at Spring Cloud Config, which allows you to have a properties server so that all your applications can hot-reload those properties at runtime when they change. I'm not sure whether you can take that path in Mule, or if Spring Cloud is supported yet, but it's a nice thing to know exists.
I am creating a new plugin containing a CustomService which is intended to replace an existing service from an existing plugin. Following the pattern found in custom security implementations and shown here, I've added the configuration to resources.groovy: oldService(path.to.new.CustomService). I've also tried adding all injected classes into the closure for this service.
(The actual service names in the code block are RegistrationPersonRegistrationCompositeService and NewRegistrationPersonRegistrationCompositeService.)
I don't want the original application code to have any reference to the new plugin. However, BuildConfig at the application level will require a plugin.location entry. My resources.groovy mods are in the new plugin. I have not had success in this endeavor. Am I modifying the wrong resources.groovy? If this change is required in the original application code, I've lost the ability to leave the original code unaltered. I'm not extending the original service nor using an override annotation. My intent is to replace the service (Spring bean) on start-up. The new plugin has a dependency on the old plugin in an attempt to manage the order of operations in loading these classes.
Does it matter that the old service is already injected in a controller? Would this require me to override the controller in the new plugin in the same fashion and inject the correct service for the desired behavior?
I've found documentation showing that within a plugin, resources.groovy will be ignored. Also, building resources.groovy into a WAR is problematic. I have not found a solution. I'm getting no error that I can share, just that the desired behavior is missing; the original service is handling the requests.
//was resource.groovy - now renamed to serviceOverRide.groovy - still located in \grails-app\conf\spring of plugin
//tried this with and without the BeanBuilder. Theory: I'm missing the autowire somehow
import org.springframework.context.ApplicationContext
import grails.spring.BeanBuilder
def bb = new BeanBuilder()
bb.beans {
    registrationPersonRegistrationCompositeService(path.to.services.registration.NewRegistrationPersonRegistrationCompositeService) { bean ->
        bean.autowire = true
        registrationRestrictionCompositeService = ref("registrationRestrictionCompositeService")
        registrationPersonTermVerificationService = ref("registrationPersonTermVerificationService")
    }
    classRegistrationController(path.to.services.registration.ClassRegistrationController) { bean ->
        bean.autowire = true
        selfServiceLookupService = ref("selfServiceLookupService")
        registrationPersonRegistrationCompositeService = ref("registrationPersonRegistrationCompositeService")
    }
}
ApplicationContext appContext = bb.createApplicationContext()
Additional information: I added the following lines to the PluginGrailsPlugin.groovy. The original service is still handling these requests:
def dependsOn = ['appPersonRegistration': '1.0.20 > *']
List loadAfter = ['appPersonRegistration']

def doWithSpring = {
    registrationPersonCourseRegistrationCompositeService(path.to.new.registration.TccRegistrationPersonCourseRegistrationCompositeService)
}

def doWithApplicationContext = { applicationContext ->
    SecurityContextHolder.setStrategyName(SecurityContextHolder.MODE_INHERITABLETHREADLOCAL)
    DefaultListableBeanFactory beanFactory = (DefaultListableBeanFactory) applicationContext.getBeanFactory()
    beanFactory.registerBeanDefinition("registrationPersonCourseRegistrationCompositeService",
            BeanDefinitionBuilder.rootBeanDefinition(TccRegistrationPersonCourseRegistrationCompositeService.class.getName()).getBeanDefinition())
}
I highly recommend you read the section of the documentation on plugins. The reason I recommend this is that plugins:
Do not include, or make use of, resources.groovy
Provide a means, through doWithSpring, to affect the Spring application context
Following the information in the documentation, you should have no issue overriding the service in the application context.
You must implement your changes to the application context using doWithSpring; this is the key to solving your issues.
In this implementation, I had a utility method in a service for which I was attempting to provide an override. The problem is that the Aspect works as a proxy and must override a method that is called directly from another class. In my classRegistrationController, I was calling the service's processRegistration(), which in turn called applyRules() (example-only method names). Since the service was calling its own utility method, there was no opportunity for the proxy/wrapper to circumvent the call to applyRules(). Once this was discovered, I refactored the code as follows: the controller calls processRegistration() as it always had; after returning, another call is made to the service, processLocalRules(). The new method is an empty placeholder intended to be overridden by the client's custom logic. The plugin with the Aspect now works using resources.groovy. I prefer doWithSpring, as Joshua explained, for this reason: my intent was to get the plugin to work without modification to the original app config; otherwise resources.groovy is a valid approach. Upvoting Joshua's answer as it does satisfy the requirement and is cleaner. Thanks!
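To make the self-invocation issue above concrete, here is a simplified Java sketch (all names are illustrative): an internal call such as this.applyRules() never passes through the Spring proxy, so the overridable hook has to be invoked from outside the service, here from the controller.

public class RegistrationCompositeService {

    public void processRegistration(Object registration) {
        applyRules(registration);          // internal call: the proxy cannot intercept this
    }

    // Empty placeholder, intended to be overridden (or advised) by the plugin's custom logic.
    public void processLocalRules(Object registration) {
    }

    protected void applyRules(Object registration) {
        // default rules
    }
}

class ClassRegistrationController {

    private RegistrationCompositeService registrationService;   // the injected proxy

    public void register(Object registration) {
        registrationService.processRegistration(registration);
        registrationService.processLocalRules(registration);    // goes through the proxy, so advice applies
    }
}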
I am creating an application that runs in Karaf as an OSGi container and uses the OSGi HTTP Service and Jersey for exposing REST APIs. I need to add SAML2 authentication and permissions-based authorization. I would like to use the annotation-based approach in Shiro for this, as Spring seems to be moving away from OSGi. My questions:
Is Shiro with SAML jars a good fit in OSGi environments?
I want to use WSO2 as the identity provider. Are there any caveats of Shiro and WSO2 working together?
For using annotations, the Shiro docs indicate I need to include AspectJ/Spring/Guice jars. Is this still valid in OSGi environments? I would prefer Guice for all my DI needs.
Would be great to have some insights from Shiro users.
UPDATE
I'm using this project: osgi-jax-rs-connector. So, I use Guice-Peaberry to register OSGi services with the interfaces annotated with @Path or @Provider, and the tool takes care of converting them into a REST resource. (Similar to pax-whiteboard?) I was planning to similarly expose my filters as OSGi services, and then dynamically add them along with the resources.
I have had headaches with AspectJ in OSGi in a previous project, where I had to switch to vanilla Equinox from Karaf because the Equinox weaving hook was not agreeing with Karaf (stack traces from Aries were seen, among other things). So, would doing something like shiro-jersey be better?
I'm sure it is doable, though I already see some restrictions/issues popping up.
As for your questions:
1) I haven't tried it, though you need to make sure you tell Pax Web and Jetty about it. It will require adding this to the jetty.xml, and it might even be necessary to add a fragment bundle to pax-web-jetty so the desired class can be loaded. This will most likely be your first ClassNotFoundException issue.
2) I don't know WSO2, so no idea.
3) If you want to use annotations, be careful. For Guice you will most likely need to use Peaberry since, AFAIK, Guice isn't "OSGi-fied" yet. Using AspectJ isn't really a good idea in an OSGi environment due to the classloader restrictions. If you use compile-time weaving it should be fine, but run-time weaving will be a challenge.
UPDATE:
Completely forgot about it, but there is a Pax Shiro project available; maybe this can be a good starting point to get your setup lined up correctly.
In the interest of readers, I'm sharing the solution I arrived at after some research of existing tools. First, the easy part: using Shiro annotations in an OSGi environment. I ended up writing the class below, since most Shiro-Jersey adapters shared by developers are based on Jersey 1.x.
import java.io.IOException;
import java.lang.annotation.Annotation;
import java.lang.reflect.Method;
import java.util.HashMap;
import java.util.Map;

import javax.ws.rs.container.ContainerRequestContext;
import javax.ws.rs.container.ContainerRequestFilter;
import javax.ws.rs.container.ResourceInfo;
import javax.ws.rs.core.Context;
import javax.ws.rs.ext.Provider;

import org.apache.shiro.authz.annotation.*;
import org.apache.shiro.authz.aop.*;

@Provider
public class ShiroAnnotationResourceFilter implements ContainerRequestFilter {

    // Maps each Shiro authorization annotation to the handler that enforces it.
    private static final Map<Class<? extends Annotation>, AuthorizingAnnotationHandler> ANNOTATION_MAP =
            new HashMap<Class<? extends Annotation>, AuthorizingAnnotationHandler>();

    @Context
    private ResourceInfo resourceInfo;

    public ShiroAnnotationResourceFilter() {
        ANNOTATION_MAP.put(RequiresPermissions.class, new PermissionAnnotationHandler());
        ANNOTATION_MAP.put(RequiresRoles.class, new RoleAnnotationHandler());
        ANNOTATION_MAP.put(RequiresUser.class, new UserAnnotationHandler());
        ANNOTATION_MAP.put(RequiresGuest.class, new GuestAnnotationHandler());
        ANNOTATION_MAP.put(RequiresAuthentication.class, new AuthenticatedAnnotationHandler());
    }

    @Override
    public void filter(ContainerRequestContext context) throws IOException {
        // Check class-level annotations first, then method-level ones.
        Class<?> resourceClass = resourceInfo.getResourceClass();
        if (resourceClass != null) {
            Annotation annotation = fetchAnnotation(resourceClass.getAnnotations());
            if (annotation != null) {
                ANNOTATION_MAP.get(annotation.annotationType()).assertAuthorized(annotation);
            }
        }
        Method method = resourceInfo.getResourceMethod();
        if (method != null) {
            Annotation annotation = fetchAnnotation(method.getAnnotations());
            if (annotation != null) {
                ANNOTATION_MAP.get(annotation.annotationType()).assertAuthorized(annotation);
            }
        }
    }

    private static Annotation fetchAnnotation(Annotation[] annotations) {
        for (Annotation annotation : annotations) {
            if (ANNOTATION_MAP.containsKey(annotation.annotationType())) {
                return annotation;
            }
        }
        return null;
    }
}
The complete project is here.
The above took care of Part 3 of my question.
For Shiro with SAML, I am using the ServiceMix-wrapped OpenSAML jar, and it seems to be working okay so far. I did, however, have to write a bit of code to make Shiro work with SAML2. It's along much the same lines as shiro-cas, but a bit more generic so it can be used with other IdPs. The code is kind of big, so I'm sharing a link to the project instead of copying classes to SO. It can be found here.
Now that I have some abstraction between my code and my IdP, WSO2 integration looks a bit simpler.
P.S. Thanks Achim for your comments and suggestions.
Neil Bartlett's article http://njbartlett.name/2010/07/19/factory-components-in-ds.html shows a way to set configuration for bundles without using a managed service or managed service factory.
Searches for examples of actually setting the configuration for this method either point to Felix File Install or to examples using a managed service.
In answer to the question OSGi Declarative Services vs. ManagedService for configuring service? Neil Bartlett states "Note that DS never actually creates a ManagedService or ManagedServiceFactory for your component. It works by listening to Config Admin with a ConfigurationListener. However the internal details are unimportant... simply create configs with PID/factoryPID matching the component.name and it "just works"
I think the technique involves placing a PID entry in the configuration dictionary, but I have no idea how this would be used with Config Admin.
A guide or simple example of how to set the configuration using this method would be very helpful.
I know it has been some time since the question was asked, but I ran into the same problem when trying to create a ManagedServiceFactory-like component with Declarative Services, so I want to share my solution. Maybe others will find it useful. My problem was like this:
I have defined a component (annotated with @Component). For each configuration I add using Felix File Install, I want an instance of that component to be created with the given configuration and activated immediately.
First I tried messing with the factory and configurationPid properties of @Component, but all of that is not needed and even gives wrong results (the Felix annotation processor in the Maven plugin seems to have a bug when handling configurationPid).
The solution I came up with:
package com.example.my;

import java.util.Map;

import org.osgi.framework.BundleContext;
import org.osgi.service.component.annotations.Activate;
import org.osgi.service.component.annotations.Component;
import org.osgi.service.component.annotations.ConfigurationPolicy;

@Component(
        name = MyExampleComponent.FACTORY_PID,
        configurationPolicy = ConfigurationPolicy.REQUIRE,
        property = {"abc=", "exampleProp="}
)
public class MyExampleComponent {

    public static final String FACTORY_PID = "com.example.my.component";

    @Activate
    protected void activate(BundleContext context, Map<String, Object> map) {
        // ...
    }
}
Then I created a configuration file for Felix File Install named com.example.my.component-test1.cfg:
abc = Hello World
exampleProp = 123
When deployed, this automatically creates a folder structure like com/example/my/component in the configuration folder, containing the files:

factory.config, with contents:

factory.pid="com.example.my.component"
factory.pidList=[ \
  "com.example.my.component.525ca4fb-2d43-46f3-b912-8765f639c46f", \
  ]

525ca4fb-2d43-46f3-b912-8765f639c46f.config, with contents:

abc="Hello World"
exampleProp="123"
felix.fileinstall.filename="file:/..._.cfg"
service.factoryPid="com.example.my.component"
service.pid="com.example.my.component.525ca4fb-2d43-46f3-b912-8765f639c46f"
The 525ca4fb-2d43-46f3-b912-8765f639c46f part seems to be some randomly generated ID (possibly a UUID).
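For completeness, if you prefer to create such a factory configuration programmatically through Config Admin rather than dropping a .cfg file for File Install, a minimal sketch (assuming a ConfigurationAdmin reference is available, e.g. injected by DS; the class name is illustrative) could look like this:

import java.util.Dictionary;
import java.util.Hashtable;

import org.osgi.service.cm.Configuration;
import org.osgi.service.cm.ConfigurationAdmin;

public class MyExampleConfigurer {

    public void addInstance(ConfigurationAdmin configAdmin) throws Exception {
        // the factory PID must match the component name of MyExampleComponent
        Configuration config = configAdmin.createFactoryConfiguration("com.example.my.component");
        Dictionary<String, Object> props = new Hashtable<String, Object>();
        props.put("abc", "Hello World");
        props.put("exampleProp", "123");
        config.update(props);   // triggers activation of a new component instance
    }
}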