I am looking for a way to track multiple OSGi services without the hassle of creating many ServiceTrackers.
Something like this:
library.track(Service1.class, Service2.class, Service3.class, new Tracker() {
    void servicesAdded(Service1 s1, Service2 s2, Service3 s3) {
        // do something with all 3 services
    }
    void servicesRemoved() {
        // one of the services is not available, no operation possible
    }
});
You cannot track more than one service using just the basic OSGi APIs. The easiest way to achieve this is to create a Declarative Services component with mandatory service references and immediate=true. The @Activate method of this component will be called when all mandatory references are present.
@Component(immediate = true)
public class MyComponent {

    @Reference
    ServiceA sa;

    @Reference
    ServiceB sb;

    @Reference
    ServiceC sc;

    @Activate
    public void activate() {
    }
}
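To cover the servicesRemoved case from the question as well, a @Deactivate method (from org.osgi.service.component.annotations) can be added to MyComponent; Declarative Services deactivates the component as soon as any mandatory reference goes away. A minimal sketch:
@Deactivate
public void deactivate() {
    // called when the component stops, for example because one of the
    // mandatory referenced services is no longer available
}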
I'm currently working on a web application using Angular 7 for the front end and Spring Boot for the back end (in which I'm developing a RESTful web service).
I'm using the @Autowired annotation to inject my services into each other and into my REST controller. The problem is that in some of my services there are attributes that get shared once the injection is done. How do I prevent that?
import org.springframework.stereotype.Service;
import org.springframework.beans.factory.annotation.Autowired;

@Service
public class ServiceA {

    private boolean test;

    public ServiceA() {
        test = true;
    }

    public void changeValues() {
        test = false;
    }

    public boolean getTest() {
        return test;
    }
}
@Service
public class ServiceB {

    @Autowired
    private ServiceA serviceA;

    public void method1() {
        serviceA.changeValues();
    }
}
@Service
public class ServiceC {

    @Autowired
    private ServiceA serviceA;

    public void method2() {
        if (serviceA.getTest()) {
            doSomethingNeeded();
        }
    }
}
@SpringBootApplication
public class Application implements CommandLineRunner {

    @Autowired
    private ServiceB b;

    @Autowired
    private ServiceC c;

    public static void main(String[] args) {
        SpringApplication.run(Application.class, args);
    }

    @Override // run after startup so the autowired fields are available
    public void run(String... args) {
        b.method1();
        c.method2();
    }
}
In this case the method doSomethingNeeded() in ServiceC won't be executed, because the attribute 'test' of ServiceA is shared between ServiceB and ServiceC. How do I prevent that?
P.S. In my case the dependency injections are far too complex to modify the services themselves, which is why I need a way to configure Spring IoC dependency injection so that each client session gets its own instances behind these private attributes.
Spring beans are singletons by default and they should not contain state.
Single-page applications (like the one you build with Angular) should hold the state on the client side anyway and pass the relevant information with every request.
The main reason is that a stateless backend is easy to scale and more reliable: if a backend service is restarted, you don't lose anything.
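As a purely illustrative sketch (the controller, path and parameter names below are made up, not from your project), the backend reads the flag from each request instead of from a singleton's field:
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class CheckController {

    @GetMapping("/check")
    public String check(@RequestParam boolean test) {
        // the Angular client sends 'test' with each request;
        // the backend keeps no per-client state between calls
        return test ? "doSomethingNeeded" : "skipped";
    }
}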
You just need to change the scope of ServiceA to prototype by adding @Scope(scopeName = "prototype"):
@Scope(scopeName = "prototype")
@Service
public class ServiceA {
}
Then, when ServiceB and ServiceC are instantiated, separate ServiceA instances will be created and injected into them.
P.S. Please note that a new instance of the prototype bean is only created when ServiceB and ServiceC themselves are instantiated. It does not mean that a new instance of the prototype bean is created every time you access it. If you want that behaviour, you need one of the techniques for fetching a fresh prototype instance per use, such as the one sketched below.
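A minimal sketch, assuming ServiceA keeps the prototype scope shown above: ServiceB injects an ObjectProvider and asks it for a new instance inside the method (a @Lookup method would achieve the same effect):
import org.springframework.beans.factory.ObjectProvider;
import org.springframework.stereotype.Service;

@Service
public class ServiceB {

    private final ObjectProvider<ServiceA> serviceAProvider;

    public ServiceB(ObjectProvider<ServiceA> serviceAProvider) {
        this.serviceAProvider = serviceAProvider;
    }

    public void method1() {
        // getObject() returns a fresh ServiceA on every call,
        // because ServiceA is prototype-scoped
        ServiceA serviceA = serviceAProvider.getObject();
        serviceA.changeValues();
    }
}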
These days I'm working on a web project and I just want to clarify a couple of things regarding Spring bean scopes and best practices for Spring-based development. Here is a scenario using sample code.
I have a web controller as below:
@Controller
public class JobController {

    private JobService jobService;

    @Autowired
    public void setJobService(JobService jobService) {
        this.jobService = jobService;
    }

    public void run() {
        Job job = new Job();
        // set the properties for the object
        jobService.run(job);
    }
}
Then I have the service as below:
@Service
public class JobService {

    public void run(Job job) {
        // perform the business logic
    }
}
Here I want to make the JobService class stateless so I can define JobService as a singleton and thereby avoid unnecessary object creation. As I understand it, to make a class stateless we should not keep instance properties. In this scenario I pass different Job objects to the service. Does that make JobService stateful, because JobService processes different Job objects? Can you please help me understand?
Thanks,
Keth
Passing different objects does not make your service stateful.
Consider this, for example:
@Service
public class JobService {

    private Job currentJob;

    public void setJob(Job job) {
        currentJob = job;
    }

    public void run() {
        // perform the business logic on currentJob
    }
}
This would make the bean 'stateful' and cause unpredictable behavior.
The execution of the method in your singleton by multiple controllers/threads will not collide and can be assumed to be safe, as long as the method only works on its parameters and local variables.
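To make that explicit, here is a sketch of the question's JobService with the thread-confinement reasoning spelled out in comments:
@Service
public class JobService {

    public void run(Job job) {
        // 'job' is a method parameter, so each calling thread works on its own
        // object; no instance field of this singleton is read or written here
        // ... perform the business logic on 'job' ...
    }
}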
I'm new to Spring.
I'm working on a library project which depends on spring-context.
@Scope(value = "##?")
@Service
public class MyService {

    private transient MyResource resource;

    @PostConstruct
    private void constructed() {
    }

    @PreDestroy
    private void destroying() {
        resource.clear();
    }

    public void doSome() throws IOException {
        // try{}finally{} is not the case
        resource = getSome();
        doSome(resource); // may throw an IOException
        resource.clear();
    }
}
I want to free the resource every time this instance is destroyed.
According to @Scope, there are four options I can choose from:
ConfigurableBeanFactory.SCOPE_SINGLETON
ConfigurableBeanFactory.SCOPE_PROTOTYPE
WebApplicationContext.SCOPE_REQUEST
WebApplicationContext.SCOPE_SESSION
I found that WebApplicationContext is not available from my dependency tree (I don't depend on spring-webmvc).
I'm planning to choose ConfigurableBeanFactory.SCOPE_PROTOTYPE.
Is it true that the scope I choose will make MyService safe? I mean, that no two or more clients can be injected with the same service instance? Will the Spring container take care of it?
Indeed, the request, session, global-session and application scopes are only available within a web-aware application context.
Singleton (a single instance per Spring container) is the default scope used by Spring, so using the prototype scope will guarantee that a new instance is created and returned to each client; so yes, prototype is what you need in this case.
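A minimal sketch of that declaration, using the constant you already found on your classpath:
import org.springframework.beans.factory.config.ConfigurableBeanFactory;
import org.springframework.context.annotation.Scope;
import org.springframework.stereotype.Service;

@Scope(ConfigurableBeanFactory.SCOPE_PROTOTYPE)
@Service
public class MyService {
    // every injection point (and every explicit lookup) gets its own instance
}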
I like the ability to use constructors to add dependencies, especially autowiring those dependencies, e.g.:
public class MyClass {

    private final Dependency dependency;

    @Autowired
    public MyClass(@Qualifier("bean-id") Dependency dependency) {
        this.dependency = dependency;
    }
}
What I'm finding is that the Spring Cloud AWS framework throws an "InstantiationException" if the "Dependency" class above happens to be a class which is passed to a Workflow worker and is missing a default, empty constructor.
A concrete example:
public class MyClass {

    private final DependencyWorkflowClientExternalFactory clientFactory;

    @Autowired
    public MyClass(@Qualifier("bean-id") DependencyWorkflowClientExternalFactory clientFactory) {
        this.clientFactory = clientFactory;
    }
}
public class WorkflowInitializer {

    @Autowired
    private WorkflowWorker workflowWorker; // assume wired with correct credentials

    public WorkflowInitializer() {
        init();
    }

    public void init() {
        workflowWorker.addWorkflowImplementationType(MyClass.class);
    }
}
The above fails with:
java.lang.InstantiationException: com.mypackage.MyClass
at java.lang.Class.newInstance(Class.java:359)
I have to do something like:
public class MyClass {

    @Autowired
    @Qualifier("bean-id")
    private Dependency dependency;

    public MyClass() {
    }
}
The question is:
Is it possible in the current release of the Spring Cloud framework to use the @Autowired annotation on a constructor? Is it a requirement that the annotation is added to the instance field?
I ask (and assume "yes") because workflow workers take class types, rather than object instances, for their workflow implementations.
As a secondary question:
Why do ActivityWorkers take object instances while WorkflowWorkers take classes?
This question is actually not related to Spring Cloud AWS but to the AWS Flow Framework for Java.
You are using the "default" WorkflowWorker which instantiates workflow implementations as POJOs. Therefore your class MyClass is not created as a Spring bean but as a POJO.
You should use the SpringWorkflowWorker provided by the AWS Flow Framework for better integration with Spring. For more information, have a look at the AWS Flow Framework documentation explaining the integration with Spring.
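A rough sketch of what that wiring might look like in Java config; the SpringWorkflowWorker constructor and setter used here are assumptions based on the AWS Flow Framework Spring-integration docs (check them against your SDK version), and the domain/task-list values are placeholders:
import java.util.Collections;

import com.amazonaws.services.simpleworkflow.AmazonSimpleWorkflow;
import com.amazonaws.services.simpleworkflow.flow.spring.SpringWorkflowWorker;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class SwfConfig {

    @Bean
    public SpringWorkflowWorker workflowWorker(AmazonSimpleWorkflow swfClient,
                                               MyClass myWorkflowImpl) throws Exception {
        SpringWorkflowWorker worker =
                new SpringWorkflowWorker(swfClient, "myDomain", "myTaskList");
        // pass the Spring-managed workflow bean (not the bare class), so its
        // @Autowired constructor dependencies are honored
        worker.setWorkflowImplementations(Collections.<Object>singletonList(myWorkflowImpl));
        return worker; // SpringWorkflowWorker is lifecycle-aware and starts with the context
    }
}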
I'm trying to initialize some components in my Jersey application in the Application constructor (the thing that inherits from ResourceConfig). It looks like this:
public Application(@Context ServletContext context,
                   @Context ServiceLocator locator) ...
When I try to use the locator at any point, I still can't create instances of things that I have registered in an AbstractBinder using the locator.create(MyThing.class) method.
I'm certain that they are bound correctly, because they are injected properly into my resource classes via the @Inject field annotation.
The difference is that the Jersey/HK2 framework is instantiating my resource classes (as expected, since they're in my package scan path), but I cannot seem to leverage the ServiceLocator through code.
My ultimate goal is to have other non-Jersey classes injected when they have the @Inject annotation, e.g. I have a worker class that needs to be injected with the configured database access layer. I want to say
locator.create(AWorker.class)
and have it injected.
How do I get the real ServiceLocator that will inject everything I've already registered/bound with my Binder? (Or should I be using something other than ServiceLocator?)
I am going to assume you are starting up a servlet and have a class extending org.glassfish.jersey.server.ResourceConfig and your bindings are correctly registered (e.g. using a Binder and registerInstances). If you then want to access the ServiceLocator in order to perform additional initialization, you have two choices:
One approach is to register a ContainerLifecycleListener (as seen in this post):
// in the Application (extends ResourceConfig) constructor
register(new ContainerLifecycleListener() {

    @Override
    public void onStartup(final Container container) {
        // access the ServiceLocator here
        final ServiceLocator serviceLocator = container.getApplicationHandler()
                .getInjectionManager().getInstance(ServiceLocator.class);
        // perform whatever with serviceLocator
    }

    @Override
    public void onReload(final Container container) {
        /* ... */
    }

    @Override
    public void onShutdown(final Container container) {
        /* ... */
    }
});
The second approach is to use a Feature, which can also be auto-discovered using @Provider:
@Provider
public final class StartupListener implements Feature {

    private final ServiceLocator sl;

    @Inject
    public StartupListener(final ServiceLocator sl) {
        this.sl = sl;
    }

    @Override
    public boolean configure(final FeatureContext context) {
        // perform whatever action with the ServiceLocator
        return true;
    }
}
How are you starting up your container? If you are using ApplicationHandler, you can just call handler.getServiceLocator(). The ServiceLocator is, indeed, what you want to be using to access your dependencies.
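For the programmatic case, a minimal sketch (this assumes a Jersey 2.x version before 2.26, where ApplicationHandler still exposes getServiceLocator(); Application and AWorker are the classes from your question):
import org.glassfish.hk2.api.ServiceLocator;
import org.glassfish.jersey.server.ApplicationHandler;

public class Bootstrap {

    public static void main(String[] args) {
        ApplicationHandler handler = new ApplicationHandler(Application.class);
        ServiceLocator locator = handler.getServiceLocator();
        // createAndInitialize() also performs field injection and @PostConstruct,
        // which plain create() does not
        AWorker worker = locator.createAndInitialize(AWorker.class);
    }
}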
If you are starting up a servlet, I found that the best way to get access to the service locator was to have a Jersey feature set it on my startup class:
private static final class LocatorSetFeature implements Feature {

    private final ServiceLocator scopedLocator;

    @Inject
    private LocatorSetFeature(ServiceLocator scopedLocator) {
        this.scopedLocator = scopedLocator;
    }

    @Override
    public boolean configure(FeatureContext context) {
        locator = this.scopedLocator; // this would set our member locator variable
        return true;
    }
}
The feature would just be registered with our resource config with config.register(new LocatorSetFeature()).
It would be important to tie the startup of other components to the lifecycle of your container, so this still feels a bit hacky. You might consider adding those classes as first-class dependencies in the HK2 container and simply injecting the appropriate dependencies into your third-party classes (using a Binder, for example).
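A sketch of that last idea, using the HK2 AbstractBinder that goes with the ServiceLocator-era API (AWorker is the class from the question; the commented-out binding uses purely illustrative names):
import org.glassfish.hk2.utilities.binding.AbstractBinder;

public class AppBinder extends AbstractBinder {

    @Override
    protected void configure() {
        // make AWorker an HK2 service so its @Inject points are satisfied
        // whenever it is injected or looked up
        bindAsContract(AWorker.class);
        // bind(MyDatabaseLayer.class).to(DatabaseAccessLayer.class); // illustrative names
    }
}
After register(new AppBinder()) on the ResourceConfig, locator.getService(AWorker.class) (or an @Inject AWorker field) returns a fully injected instance.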