I am using Apache Felix for OSGi. Other bundles are running just fine. I've added a new bundle to the reactor. The Activator.init() method is called, but I never get into the updated() method. Any ideas?
public class Activator extends DependencyActivatorBase implements ManagedServiceFactory {
private static final Logger logger = LoggerFactory.getLogger(Activator.class);
public static final String PID = "my.unique.pid";
private final Map<String, Component> components = new HashMap<>();
private volatile DependencyManager dependencyManager; /* injected by dependency manager */
@Override
public void init(BundleContext bc, DependencyManager dm) throws Exception {
Properties props = new Properties();
props.put(Constants.SERVICE_PID, PID);
dm.add(createComponent()
.setInterface(ManagedServiceFactory.class.getName(), props)
.setImplementation(this)
.add(createConfigurationDependency().setPid(PID))
);
dm.add(createComponent()
.setInterface(SessionRegister.class.getName(), null)
.setImplementation(SessionRegisterImpl.class)
);
dm.add(createComponent()
.setInterface(Plugin.class.getName(), null)
.setImplementation(PriorityActionHandler.class)
.add(createServiceDependency().setRequired(true).setService(PluginManager.class))
);
}
@Override
public void updated(String pid, Dictionary<String, ?> properties) throws ConfigurationException {
logger.debug("This method should be called and run!");
if (properties == null) {
logger.warn("Configuration is empty!");
return;
}
...
}
The init method is likely not what you intended. You create a component that is a ManagedServiceFactory, registered with a PID, and you also give that same component a configuration dependency on the same PID (which translates to making it a ManagedService with that PID as well, which is not allowed and definitely confusing). I am assuming you meant one of these two, not both.
The updated method you have right now assumes you wanted to be a ManagedServiceFactory; in that case updated will be invoked once for each configuration (one or more) that you provide for the factory PID. From your code I cannot tell whether you have an implementation of Configuration Admin installed, nor whether you actually provide one or more configurations for this PID.
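To make the two options concrete, here is a minimal sketch of each, assuming the same Dependency Manager API used in your init method (only the registration code is shown; props is the Properties object carrying the PID):
// Option A: be a ManagedServiceFactory only. Register the service under the
// factory PID and do NOT add a configuration dependency; Configuration Admin
// will then call updated(String pid, Dictionary props) once per factory
// configuration you create for that PID.
dm.add(createComponent()
    .setInterface(ManagedServiceFactory.class.getName(), props)
    .setImplementation(this));

// Option B: be a plain ManagedService. Drop the factory registration and keep
// only the configuration dependency; Dependency Manager will then invoke an
// updated(Dictionary) callback on the implementation for the single
// configuration bound to that PID.
dm.add(createComponent()
    .setImplementation(this)
    .add(createConfigurationDependency().setPid(PID)));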
Please provide more information to further pinpoint this issue if this answer does not help you yet.
I'm trying to declare multiple SFTP sessions, wrap them in a DelegatingSessionFactory, and then later use SftpRemoteFileTemplate.execute(...) during a cron job.
The execute part is very simple and is already used for a single session; I just want to expand it to multiple possible sessions.
Below I extended my single-session code. I copied the methods for reference; at the end I'll show how I think the new methods should look.
public class XSession extends SftpSession {
@Scheduled(cron = "${sftp.scan.x.schedule}")
void scan() {
List<FileHistoryEntity> fileList = template.execute(this::processFiles);
...
}
private List<FileHistoryEntity> processFiles(Session<ChannelSftp.LsEntry> session) {
List.of(session.list(this.remoteDir)).forEach(file -> doWhatever());
...
}
}
But now I have multiple sessions. So I declare the following class:
@Slf4j
@Configuration
@RequiredArgsConstructor
public class DelegateSftpSessionHandler {
private final SessionFactory<ChannelSftp.LsEntry> session1;
private final SessionFactory<ChannelSftp.LsEntry> session2;
private final SessionFactory<ChannelSftp.LsEntry> session3;
private final SessionFactory<ChannelSftp.LsEntry> session4;
private final SessionFactory<ChannelSftp.LsEntry> session5;
@RequiredArgsConstructor
public enum DelegateSessionConfig {
SESSION_1("IN_REALITY_A_RELEVANT_NAME_1"),
SESSION_2("IN_REALITY_A_RELEVANT_NAME_2"),
SESSION_3("IN_REALITY_A_RELEVANT_NAME_3"),
SESSION_4("IN_REALITY_A_RELEVANT_NAME_4"),
SESSION_5("IN_REALITY_A_RELEVANT_NAME_5");
public final String threadKey;
}
@Bean
@Primary
public DelegatingSessionFactory<ChannelSftp.LsEntry> delegatingSessionFactory() {
Map<Object, SessionFactory<ChannelSftp.LsEntry>> sessionMap = new HashMap<>();
sessionMap.put(DelegateSessionConfig.SESSION_1.threadKey, session1);
sessionMap.put(DelegateSessionConfig.SESSION_2.threadKey, session2);
sessionMap.put(DelegateSessionConfig.SESSION_3.threadKey, session3);
sessionMap.put(DelegateSessionConfig.SESSION_4.threadKey, session4);
sessionMap.put(DelegateSessionConfig.SESSION_5.threadKey, session5);
DefaultSessionFactoryLocator<ChannelSftp.LsEntry> sessionLocator = new DefaultSessionFactoryLocator<>(sessionMap);
return new DelegatingSessionFactory<>(sessionLocator);
}
@Bean
SftpRemoteFileTemplate ftpRemoteFileTemplate(DelegatingSessionFactory<ChannelSftp.LsEntry> dsf) {
return new SftpRemoteFileTemplate(dsf);
}
}
Thing is, I have no idea how any of this works, and the Spring SFTP/FTP documentation is by no means clear. The code is virtually undocumented, and I'm just guessing. I think that I have to do the following:
public class XSession extends SftpSession {
@Autowired
DelegatingSessionFactory<ChannelSftp.LsEntry> delegatingSessionFactory;
@Autowired
SftpRemoteFileTemplate template;
@Scheduled(cron = "${sftp.scan.x.schedule}") // x == SESSION_1
@Async // for thread key
void scan() {
delegatingSessionFactory.setThreadKey(DelegateSessionConfig.SESSION_1.threadKey);
// because thread key changes the session globally? So I don't need specify
// which session this template is working with???
List<FileHistoryEntity> fileList = template.execute(this::processFiles);
...
delegatingSessionFactory.clearThreadKey();
}
private List<FileHistoryEntity> processFiles(Session<ChannelSftp.LsEntry> session) {
List.of(session.list(this.remoteDir)).forEach(file -> doWhatever());
...
}
}
I'm basing what I'm saying on the following link: github spring integration test.
Honestly, I hardly understand what is happening there, but it seems like setting the thread key changes the session globally.
My only other idea is to just ... create the RemoteFileTemplate on demand
public static SftpRemoteFileTemplate getTemplateFor(DelegatingSessionFactory<ChannelSftp.LsEntry> dsf, DelegateSessionConfig session) {
return new SftpRemoteFileTemplate(dsf.getFactoryLocator().getSessionFactory(session.threadKey));
}
It does not set it globally. That's how a ThreadLocal variable works: you set a value in some thread and only this thread can see it. If you use the same object concurrently, other threads don't see that value because it does not belong to their thread state.
I'm not sure what your concern is, but the pattern of extending SftpSession for custom logic is not right. You should consider using SftpRemoteFileTemplate.execute(SessionCallback<F, T> callback) instead; the thread key must still be set on the DelegatingSessionFactory beforehand, in the same thread that is going to call that execute().
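As a minimal sketch of that advice (reusing the delegatingSessionFactory, template and DelegateSessionConfig names from your question, so treat it as an outline rather than tested code), the scheduled method would set the key, run the callback and always clear the key again:
@Scheduled(cron = "${sftp.scan.x.schedule}")
void scan() {
    // Select the session factory for this thread before calling execute().
    delegatingSessionFactory.setThreadKey(DelegateSessionConfig.SESSION_1.threadKey);
    try {
        // The SessionCallback runs in this same thread, so it sees the
        // session chosen by the thread key above.
        List<FileHistoryEntity> fileList = template.execute(this::processFiles);
        // ... work with fileList ...
    } finally {
        // Always clear the key so a pooled scheduler thread does not keep
        // pointing at SESSION_1 for unrelated runs.
        delegatingSessionFactory.clearThreadKey();
    }
}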
These two pieces of code should do exactly the same thing, but the first one works and the second one doesn't. Can anyone review the code and explain why the second approach fails?
The first code :
@Component
public class AdminSqlUtil implements SqlUtil {
@Autowired private ApplicationContext context;
DataSource dataSource =(DataSource) context.getBean("adminDataSource");
public void runSqlFile(String SQLFileName) {
Resource resource = context.getResource(SQLFileName);
EncodedResource encodedResource = new EncodedResource(resource, Charset.forName("UTF-8"));
try {
ScriptUtils.executeSqlScript(dataSource.getConnection(), encodedResource);
} catch (SQLException ex) {
throw new RuntimeException(ex);
}
}
The second code :
@Component
public class AdminSqlUtil implements SqlUtil {
@Autowired private ApplicationContext context;
public void runSqlFile(String SQLFileName) {
Resource resource = context.getResource(SQLFileName);
EncodedResource encodedResource = new EncodedResource(resource, Charset.forName("UTF-8"));
try {
ScriptUtils.executeSqlScript((DataSource)context.getBean("adminDataSource").getConnection(), encodedResource);
} catch (SQLException ex) {
throw new RuntimeException(ex);
}
}
The first one has a private field and the framework cannot access it. You could have added @Inject before your private field so the framework can initialize it. However, the best practice is to define a public setter for that dependency.
The second one, on the other hand, initializes the value at construction time, which is not dependency injection at all. I am not talking about good and bad practice here; it is simply wrong. We don't initialize a variable that is supposed to be initialized by the framework.
So let's go with the first one: try to add a setter for it.
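As a rough sketch of that suggestion (the setter names are illustrative, and adminDataSource is the bean name from your code), setter injection keeps the DataSource lookup out of field initialization, which runs before the context is injected:
@Component
public class AdminSqlUtil implements SqlUtil {

    private ApplicationContext context;
    private DataSource dataSource;

    @Autowired
    public void setContext(ApplicationContext context) {
        this.context = context;
    }

    // Spring calls this setter after construction, so the bean lookup no
    // longer happens while 'context' is still null.
    @Autowired
    public void setDataSource(@Qualifier("adminDataSource") DataSource dataSource) {
        this.dataSource = dataSource;
    }

    public void runSqlFile(String sqlFileName) {
        Resource resource = context.getResource(sqlFileName);
        EncodedResource encodedResource = new EncodedResource(resource, StandardCharsets.UTF_8);
        try {
            ScriptUtils.executeSqlScript(dataSource.getConnection(), encodedResource);
        } catch (SQLException ex) {
            throw new RuntimeException(ex);
        }
    }
}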
Take a look at this link.
I have a service that requires a configuration
@Component(service = InstrumenterService.class, configurationPid = "InstrumenterService", configurationPolicy = ConfigurationPolicy.REQUIRE, scope = ServiceScope.PROTOTYPE)
public class InstrumenterService
This service is referenced inside another service :
@Component(service = SampleService.class, scope = ServiceScope.PROTOTYPE)
public class SampleService {
@Reference(cardinality = ReferenceCardinality.OPTIONAL, scope = ReferenceScope.PROTOTYPE_REQUIRED, policyOption = ReferencePolicyOption.GREEDY)
InstrumenterService coverageInstrumenter;
public boolean hasInstrumenter() {
if(coverageInstrumenter == null)
return false;
return true;
}
}
This SampleService is used inside a Main class hooked to the main OSGi thread.
I'm using ComponentServiceObjects because I want to create SampleService instances on demand.
@Component(immediate = true, property = "main.thread=true")
public class Main implements Runnable {
@Reference
ConfigurationAdmin cfgAdm;
@Reference(scope = ReferenceScope.PROTOTYPE_REQUIRED)
private ComponentServiceObjects<SampleService> sampleServices;
public void run() {
if (cfgAdm != null) {
Configuration configuration;
try {
configuration = cfgAdm.getConfiguration("InstrumenterService", "?");
Hashtable<String, Object> props = new Hashtable<>();
props.put("some_prop", "some_value");
configuration.update(props);
} catch (IOException e1) {
e1.printStackTrace();
}
}
SampleService servicess = sampleServices.getService();
System.out.println(servicess.hasInstrumenter());
}
}
The problem I have is that the configuration set through ConfigurationAdmin is not visible in the InstrumenterService unless I put a Thread.sleep(500); call after configuration.update(props).
I'm not really comfortable using Thread.sleep to ensure the configuration update is visible.
Is there an API to check that the configuration has been updated and is available to use?
Thanks to Neil I was able to find a workable solution.
I used a ServiceTracker after the configuration was set to wait for the service:
BundleContext bundleContext = FrameworkUtil.getBundle(getClass()).getBundleContext();
ServiceTracker serviceTracker = new ServiceTracker(bundleContext, InstrumenterService.class.getName(), null);
serviceTracker.open();
try {
serviceTracker.waitForService(500);
} catch (InterruptedException e) {
e.printStackTrace();
}
serviceTracker.close();
The reason I needed ConfigurationAdmin in the first place is that there is an interface IInstrumenter which can be implemented by many different classes.
The name of the desired instrumenter is set through ConfigurationAdmin, and further on in other services the required instrumenter service is fetched "automagically".
This way any number of instrumenters can be added to the application, and only the name of the instrumenter needs to be known in order for it to be used.
I also want to mention that with OSGi we managed to split our monolithic legacy application into about 15 modules that do not depend directly on each other but communicate through an API layer.
Thanks again for the good job you are doing with OSGi.
As clarified in the comments, this code is not exactly realistic. In production code there is not normally a requirement to update a configuration record and then immediately obtain a service published by a component. This is because any such code makes too many assumptions about the effect of the configuration update.
A call to getServiceReference and getService returns only a snapshot of the service registry state at a particular instant. It is inherently unreliable to call getService expecting it to return a value.
In reality, we always use a pattern where we react to being notified of the existence of the service. This can be done in various ways, including ServiceListener and ServiceTracker, but the simplest is to write a component with a reference, e.g.:
@Component
public class MyComponent {
@Reference
SampleService service;
public void doSomething() {
System.out.println(service.hasInstrumenter());
}
}
This component has a mandatory reference to SampleService and will be activated only when an instance of SampleService is available.
I've got some code like this to read a value that could be set either with a sling:OsgiConfig node or after being set in the Felix UI...
@Component(immediate = true, metatype = true, label = "Dummy Service")
public class DummyService {
@Property(label = "Dummy Service Value")
public static final String DUMMY_VALUE = "dummyValue";
private static String m_strDummyValue = "default value";
public static String getDummyValue(){
return m_strDummyValue;
}
@Activate
protected void activate(ComponentContext context) {
configure(context.getProperties());
}
@Deactivate
protected void deactivate(ComponentContext context) {
}
@Modified
protected void modified(ComponentContext componentContext) {
configure(componentContext.getProperties());
}
public void updated(Dictionary properties) throws ConfigurationException {
configure(properties);
}
private void configure(Dictionary properties) {
m_strDummyValue = OsgiUtil.toString(properties.get(DUMMY_VALUE), null);
}
}
And could be called in any consuming class as
DummyService.getDummyValue();
This is currently working in our development environment. It's also very similar to some code that another vendor wrote and is currently in production in the client environment, and seems to be working. However, I ran across this post OSGi component configurable via Apache Felix... which recommends against using a static accessor like this. Are there potential problems where getDummyValue() could return an incorrect value, or is the recommendation more about being philosophically consistent with OSGi's patterns?
Generally, statics are frowned upon, especially in OSGi, as they involve tight code coupling. It would be better to make DummyService an interface, have your class implement it, and register the component as a service. Others would then reference your component's service; once injected with it, they can call its methods.
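A minimal sketch of that shape, written with the standard OSGi Declarative Services annotations rather than the Felix SCR annotations used in the question (the class names and the "dummyValue" property key are illustrative):
public interface DummyService {
    String getDummyValue();
}

@Component
public class DummyServiceImpl implements DummyService {

    private volatile String dummyValue = "default value";

    // Called on activation and whenever the configuration (default PID =
    // implementation class name) is updated.
    @Activate
    @Modified
    void activate(Map<String, Object> properties) {
        Object value = properties.get("dummyValue");
        this.dummyValue = value != null ? value.toString() : "default value";
    }

    @Override
    public String getDummyValue() {
        return dummyValue;
    }
}

@Component
public class DummyConsumer {

    // Mandatory reference: this component is only activated once a
    // DummyService is available, so the configured value is already in place.
    @Reference
    private DummyService dummyService;

    public void doWork() {
        System.out.println(dummyService.getDummyValue());
    }
}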
You shouldn't do this for one major reason: there is no guarantee that DummyService has been configured when you access the static method - in contrast with a service reference.
I have an app that uses the "task:scheduler" and "task:scheduled-tasks" elements (the latter containing "task:scheduled" elements). This is all working fine.
I'm trying to write some code that introspects the "application configuration" to get a short summary of some important information, like what tasks are scheduled and what their schedule is.
I already have a class that has a bunch of @Autowired instance variables so I can iterate through all of this. It was easy enough to add a List<TaskScheduler> to get all of the TaskScheduler objects. I only have two of these, and I have a different set of scheduled tasks in each of them.
What I can't see in those TaskScheduler objects (they are actually ThreadPoolTaskScheduler objects) is anything that looks like a list of scheduled tasks, so I'm guessing the list of scheduled tasks is recorded somewhere else.
What objects can I use to introspect the set of scheduled tasks, and which thread pool they are in?
This functionality will be available in Spring 4.2
https://jira.spring.io/browse/SPR-12748 (Disclaimer: I reported this issue and contributed code towards its solution).
// Warning there may be more than one ScheduledTaskRegistrar in your
// application context. If this is the case you can autowire a list of
// ScheduledTaskRegistrar instead.
@Autowired
private ScheduledTaskRegistrar scheduledTaskRegistrar;
public List<Task> getScheduledTasks() {
List<Task> result = new ArrayList<Task>();
result.addAll(this.scheduledTaskRegistrar.getTriggerTaskList());
result.addAll(this.scheduledTaskRegistrar.getCronTaskList());
result.addAll(this.scheduledTaskRegistrar.getFixedRateTaskList());
result.addAll(this.scheduledTaskRegistrar.getFixedDelayTaskList());
return result;
}
// You can then inspect the tasks,
// so for example a cron task can be inspected like this:
public List<CronTask> getScheduledCronTasks() {
List<CronTask> cronTaskList = this.scheduledTaskRegistrar.getCronTaskList();
for (CronTask cronTask : cronTaskList) {
System.out.println(cronTask.getExpression());
}
return cronTaskList;
}
If you are using a ScheduledMethodRunnable defined in XML:
<task:scheduled method="run" cron="0 0 12 * * ?" ref="MyObject" />
You can access the underlying target object:
ScheduledMethodRunnable scheduledMethodRunnable = (ScheduledMethodRunnable) task.getRunnable();
TargetClass target = (TargetClass) scheduledMethodRunnable.getTarget();
I have a snippet for pre-Spring 4.2, since 4.2 is still sitting at release-candidate level.
The ScheduledFuture interface is implemented by every runnable element in the BlockingQueue.
Map<String, ThreadPoolTaskScheduler> schedulers = applicationContext
.getBeansOfType(ThreadPoolTaskScheduler.class);
for (ThreadPoolTaskScheduler scheduler : schedulers.values()) {
ScheduledExecutorService exec = scheduler.getScheduledExecutor();
ScheduledThreadPoolExecutor poolExec = scheduler
.getScheduledThreadPoolExecutor();
BlockingQueue<Runnable> queue = poolExec.getQueue();
Iterator<Runnable> iter = queue.iterator();
while (iter.hasNext()) {
// Take each element from the queue once: it is both the Runnable job
// and a ScheduledFuture.
Runnable job = iter.next();
ScheduledFuture<?> future = (ScheduledFuture<?>) job;
future.getDelay(TimeUnit.MINUTES);
logger.debug(MessageFormat.format(":: Task Class is {0}", JobDiscoverer.findRealTask(job)));
}
}
Here's a reflective way to get information about which job class is in the pool, as threadPoolNamePrefix didn't return a distinct name for me:
public class JobDiscoverer {
private final static Field syncInFutureTask;
private final static Field callableInFutureTask;
private static final Class<? extends Callable> adapterClass;
private static final Field runnableInAdapter;
private static Field reschedulingRunnable;
private static Field targetScheduledMethod;
static {
try {
reschedulingRunnable = Class
.forName(
"org.springframework.scheduling.support.DelegatingErrorHandlingRunnable")
.getDeclaredField("delegate");
reschedulingRunnable.setAccessible(true);
targetScheduledMethod = Class
.forName(
"org.springframework.scheduling.support.ScheduledMethodRunnable")
.getDeclaredField("target");
targetScheduledMethod.setAccessible(true);
callableInFutureTask = Class.forName(
"java.util.concurrent.FutureTask$Sync").getDeclaredField(
"callable");
callableInFutureTask.setAccessible(true);
syncInFutureTask = FutureTask.class.getDeclaredField("sync");
syncInFutureTask.setAccessible(true);
adapterClass = Executors.callable(new Runnable() {
public void run() {
}
}).getClass();
runnableInAdapter = adapterClass.getDeclaredField("task");
runnableInAdapter.setAccessible(true);
} catch (NoSuchFieldException e) {
throw new ExceptionInInitializerError(e);
} catch (SecurityException e) {
throw new PiaRuntimeException(e);
} catch (ClassNotFoundException e) {
throw new PiaRuntimeException(e);
}
}
public static Object findRealTask(Runnable task) {
if (task instanceof FutureTask) {
try {
Object syncAble = syncInFutureTask.get(task);
Object callable = callableInFutureTask.get(syncAble);
if (adapterClass.isInstance(callable)) {
Object reschedulable = runnableInAdapter.get(callable);
Object targetable = reschedulingRunnable.get(reschedulable);
return targetScheduledMethod.get(targetable);
} else {
return callable;
}
} catch (IllegalAccessException e) {
throw new IllegalStateException(e);
}
}
throw new ClassCastException("Not a FutureTask");
}
}
With @Scheduled based configuration the approach from Tobias M’s answer does not work out-of-the-box.
Instead of autowiring a ScheduledTaskRegistrar instance (which is not available for annotation based configuration), you can instead autowire a ScheduledTaskHolder which only has a getScheduledTasks() method.
Background:
The ScheduledAnnotationBeanPostProcessor used to manage @Scheduled tasks has an internal ScheduledTaskRegistrar that’s not available as a bean. It does implement ScheduledTaskHolder, though.
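A minimal sketch of that approach (assuming a Spring version recent enough to expose ScheduledTaskHolder; the surrounding class is illustrative):
@Component
public class ScheduledTaskInspector {

    // ScheduledAnnotationBeanPostProcessor implements ScheduledTaskHolder,
    // so injecting the interface gives access to the @Scheduled tasks.
    @Autowired
    private ScheduledTaskHolder scheduledTaskHolder;

    public void logTasks() {
        for (ScheduledTask scheduledTask : scheduledTaskHolder.getScheduledTasks()) {
            // getTask() returns the underlying Task (cron, fixed-rate or
            // fixed-delay) for further inspection.
            System.out.println(scheduledTask.getTask());
        }
    }
}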
Every Spring XML element has a corresponding BeanDefinitionParser. For <task:scheduled-tasks>, that's ScheduledTasksBeanDefinitionParser.
This uses a BeanDefinitionBuilder to create a BeanDefinition for a bean of type ContextLifecycleScheduledTaskRegistrar. Your scheduled tasks will be stored in that bean.
The tasks will be executing in either a default TaskScheduler or one you provided.
I've given you the class names so you can look at the source code yourself if you need more fine-grained details.
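For example, a sketch along those lines (assuming Spring 4.2+, where ScheduledTaskRegistrar exposes its task lists, and an injected ApplicationContext named applicationContext):
// ContextLifecycleScheduledTaskRegistrar is the bean type created for
// <task:scheduled-tasks>, so looking it up by type finds the registered tasks.
Map<String, ContextLifecycleScheduledTaskRegistrar> registrars =
        applicationContext.getBeansOfType(ContextLifecycleScheduledTaskRegistrar.class);
for (ContextLifecycleScheduledTaskRegistrar registrar : registrars.values()) {
    for (CronTask cronTask : registrar.getCronTaskList()) {
        System.out.println(cronTask.getExpression() + " -> " + cronTask.getRunnable());
    }
}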