Is there a simple way to list the consumer components for an interface? - osgi

Fellow coders,
I'm currently trying to find a simple and concise way to get a listing of the Services/Components that use a given Interface. I'm using the Gogo shell of a running Liferay 7.1.x server and can't seem to find an easy and direct way to do just that.
We want to override references to the used service via OSGi configuration, but first we need to find all components using it.
As there are static, reluctant references to the service component, simply providing an alternative with a higher service ranking is not a viable solution.
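For clarity, the kind of override we have in mind is the standard DS mechanism of retargeting a component's reference via a Configuration Admin target property; a rough sketch, where the PID some.consumer.component.Pid and the reference name webSsoProfile are placeholders, not real Liferay names:

import java.io.IOException;
import java.util.Dictionary;
import java.util.Hashtable;

import org.osgi.service.cm.Configuration;
import org.osgi.service.cm.ConfigurationAdmin;

// Sketch: override the target filter of a consumer's "webSsoProfile"
// reference so it binds our implementation instead of the default one.
public class ReferenceRetargeter {

    public void retarget(ConfigurationAdmin configurationAdmin) throws IOException {
        Configuration configuration =
            configurationAdmin.getConfiguration("some.consumer.component.Pid", "?");
        Dictionary<String, Object> properties = new Hashtable<>();
        properties.put("webSsoProfile.target",
            "(component.name=our.custom.WebSsoProfileImpl)");
        configuration.update(properties);
    }
}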
Here are the Gogo-related bundles I'm using:
35|Active | 6|Apache Felix Gogo Command (1.0.2)|1.0.2
36|Active | 6|Apache Felix Gogo Runtime (1.1.0.LIFERAY-PATCHED-2)|1.1.0.LIFERAY-PATCHED-2
72|Active | 6|Apache Felix Gogo Shell (1.1.0)|1.1.0
542|Active | 10|Liferay Foundation - Liferay Gogo Shell - Impl (1.0.13)|1.0.13
543|Active | 10|Liferay Gogo Shell Web (2.0.25)|2.0.25
So far I've been able to list all providers of an interface via se (interface=com.liferay.saml.runtime.servlet.profile.WebSsoProfile):
{com.liferay.saml.runtime.profile.WebSsoProfile, com.liferay.saml.runtime.servlet.profile.WebSsoProfile}={service.id=6293, service.bundleid=79, service.scope=bundle, component.name=com.liferay.saml.opensaml.integration.internal.servlet.profile.WebSsoProfileImpl, component.id=3963}
"Registered by bundle:" de.haufe.leong.com.liferay.saml.opensaml.integration [79]
"Bundles using service"
com.liferay.saml.web_2.0.11 [82]
com.liferay.saml.impl_2.0.12 [78]
And I can see all bundle requirements via inspect requirement service:
com.liferay.saml.impl_2.0.12 [78] requires:
...
service; com.liferay.saml.runtime.profile.WebSsoProfile, com.liferay.saml.runtime.servlet.profile.WebSsoProfile provided by:
de.haufe.leong.com.liferay.saml.opensaml.integration [79]
...
But listing the actual services within these bundles that use the given interface (or the service component) has eluded me so far.
The only solution I see so far is listing all components of these bundles via scr:list <bundleId> and then checking each one with scr:info <componentId> to see whether it uses the WebSsoProfile service.
Does anyone know a faster way to find the services using the WebSsoProfile service?
[EDIT]: We solved the problem without having to provide config overrides for all consumers of the WebSsoProfile service; instead, we ensure that our implementation is used by deactivating the default service on server startup. You can see the approach described here.
Anyway, for debugging purposes this kind of lookup would still be very useful.
So if anyone knows a way to retrieve the list of all consumers of an interface, please post your solution!

The standard solution is using the inspect command. It has a special namespace for services. Since a service registration is a capability, you can use inspect capability service:
g! inspect c service
org.apache.felix.framework [0] provides:
----------------------------------------
service; org.osgi.service.resolver.Resolver with properties:
service.bundleid = 0
service.id = 1
service.scope = singleton
service; org.osgi.service.packageadmin.PackageAdmin with properties:
service.bundleid = 0
service.id = 2
service.scope = singleton
service; org.osgi.service.startlevel.StartLevel with properties:
service.bundleid = 0
service.id = 3
service.scope = singleton
....
However, I find this command seriously useless: it is inflexible and its output is horrible.
Gogo, though, is way more powerful than people know. For one, you can use all the methods on the BundleContext.
g! servicereferences org.osgi.service.startlevel.StartLevel null
000003 0 StartLevel
If you want to see the properties of each service:
g! each (servicereferences org.osgi.service.startlevel.StartLevel null) { $it properties }
[service.id=3, objectClass=[Ljava.lang.String;#4acd14d7, service.scope=singleton, service.bundleid=0]
You can make this into a built-in function:
g! srv = { servicereferences $1 null }
servicereferences $1 null
g! srv org.osgi.service.startlevel.StartLevel
000003 0 StartLevel
Unfortunately, OSGi added an overloaded getServiceReferences() method to BundleContext that throws an NPE when called with null, and Gogo is awful with overloaded methods :-(
However, it is trivial to add your own command with a declarative service component. You could use the following:
import java.util.stream.Stream;

// @GogoCommand ships with the Felix Gogo runtime 1.1.0+
import org.apache.felix.service.command.Descriptor;
import org.apache.felix.service.command.annotations.GogoCommand;
import org.osgi.framework.BundleContext;
import org.osgi.framework.Constants;
import org.osgi.framework.InvalidSyntaxException;
import org.osgi.framework.ServiceReference;
import org.osgi.service.component.annotations.Activate;
import org.osgi.service.component.annotations.Component;

@GogoCommand(scope = "service", function = "srv")
@Component(service = ServiceCommand.class)
public class ServiceCommand {

    @Activate
    BundleContext context;

    @Descriptor("List all services")
    public ServiceReference<?>[] srv() throws InvalidSyntaxException {
        return context.getAllServiceReferences(null, null);
    }

    @Descriptor("List all services that match the name")
    public ServiceReference<?>[] srv(String... names) throws InvalidSyntaxException {
        ServiceReference<?>[] allServiceReferences =
            context.getAllServiceReferences(null, null);
        if (allServiceReferences == null)
            return new ServiceReference[0];
        return Stream.of(allServiceReferences)
            .filter(r -> {
                String[] objectClass = (String[]) r.getProperty(Constants.OBJECTCLASS);
                for (String oc : objectClass) {
                    for (String name : names)
                        if (oc.contains(name))
                            return true;
                }
                return names.length == 0;
            })
            .sorted()
            .toArray(ServiceReference[]::new);
    }
}
This adds the srv command to Gogo:
g! srv Help Basic
000004 1 Basic
000005 1 Inspect
Update: If you want to find out which bundles are using a specific service, you could use:
g! each (srv X) { $it usingbundles }
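And if you would rather have that as a command as well, the component above could be extended with something like the following sketch (srvUsers is a made-up name; it would also need to be listed in @GogoCommand, e.g. function = {"srv", "srvUsers"}, and it assumes java.util.Arrays and org.osgi.framework.Bundle are imported):

@Descriptor("List the bundles using each service that matches the name")
public void srvUsers(String... names) throws InvalidSyntaxException {
    for (ServiceReference<?> reference : srv(names)) {
        Bundle[] users = reference.getUsingBundles(); // null when no bundle uses it
        System.out.println(reference + " -> "
            + (users == null ? "no using bundles" : Arrays.toString(users)));
    }
}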
Make sure you have the following dependencies on your build path:
-buildpath: \
org.osgi.service.component.annotations,\
org.apache.felix.gogo.runtime, \
org.osgi.framework

Related

How can I use S3InboundFileSynchronizer to synchronize an S3Bucket organized with directories?

I'm trying to use the S3InboundFileSynchronizer to synchronize an S3Bucket to a local directory. The bucket is organised with sub-directories such as:
bucket ->
  2016 ->
    08 ->
      daily-report-20160801.csv
      daily-report-20160802.csv
      etc...
Using this configuration:
@Bean
public S3InboundFileSynchronizer s3InboundFileSynchronizer() {
    S3InboundFileSynchronizer synchronizer = new S3InboundFileSynchronizer(amazonS3());
    synchronizer.setDeleteRemoteFiles(true);
    synchronizer.setPreserveTimestamp(true);
    synchronizer.setRemoteDirectory("REDACTED");
    synchronizer.setFilter(new S3RegexPatternFileListFilter(".*\\.csv$"));
    Expression expression = PARSER.parseExpression("#this.substring(#this.lastIndexOf('/')+1)");
    synchronizer.setLocalFilenameGeneratorExpression(expression);
    return synchronizer;
}
I'm able to get as far as connecting to the bucket and listing its contents. When it comes time to read from the bucket the following exception is thrown:
org.springframework.messaging.MessagingException: Problem occurred while synchronizing remote to local directory; nested exception is
org.springframework.messaging.MessagingException: Failed to execute on session;
nested exception is
java.lang.IllegalStateException: 'path' must in pattern [BUCKET/KEY].
at org.springframework.integration.file.remote.synchronizer.AbstractInboundFileSynchronizer.synchronizeToLocalDirectory(AbstractInboundFileSynchronizer.java:266)
Reviewing the code, it seems that it'd be impossible to ever synchronize an S3 bucket with sub-directories:
private String[] splitPathToBucketAndKey(String path) {
    Assert.hasText(path, "'path' must not be empty String.");
    String[] bucketKey = path.split("/");
    Assert.state(bucketKey.length == 2, "'path' must in pattern [BUCKET/KEY].");
    Assert.state(bucketKey[0].length() >= 3, "S3 bucket name must be at least 3 characters long.");
    bucketKey[0] = resolveBucket(bucketKey[0]);
    return bucketKey;
}
Is there some configuration I'm missing or is this a bug?
(I'm assuming it's a bug 'till I'm told otherwise so I've submitted a pull request with a proposed fix.)
Yes, it is a bug, and the submitted pull request is a good fix.
The only workaround in the meantime is a custom SessionFactory<S3ObjectSummary> that returns a custom S3Session extension with the fix from the PR applied.
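For reference, the essence of such a fix is splitting only on the first slash so the key itself may contain sub-directory separators; a sketch of the idea (not the verbatim PR):

private String[] splitPathToBucketAndKey(String path) {
    Assert.hasText(path, "'path' must not be empty String.");
    // split on the first '/' only: everything after it is the S3 key,
    // which may itself contain '/' separators for sub-directories
    String[] bucketKey = path.split("/", 2);
    Assert.state(bucketKey.length == 2, "'path' must be in pattern [BUCKET/KEY].");
    Assert.state(bucketKey[0].length() >= 3, "S3 bucket name must be at least 3 characters long.");
    bucketKey[0] = resolveBucket(bucketKey[0]);
    return bucketKey;
}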

elasticsearch can't resolve environment variables in elasticsearch.yml

I have the following two settings in my elasticsearch.yml file. They are the only ones that pull from environment variables.
cloud.aws.access_key: ${AWS_ACCESS_KEY_ID}
cloud.aws.secret_key: ${AWS_SECRET_KEY}
When I restart elasticsearch to load these from the environment, I get an error that it can't resolve them. I've tested it and it will not resolve either, so this error applies to both (it just fails on the bottom one first)
- IllegalArgumentException[Could not resolve placeholder 'AWS_SECRET_KEY']
java.lang.IllegalArgumentException: Could not resolve placeholder 'AWS_SECRET_KEY'
at org.elasticsearch.common.property.PropertyPlaceholder.parseStringValue(PropertyPlaceholder.java:124)
at org.elasticsearch.common.property.PropertyPlaceholder.replacePlaceholders(PropertyPlaceholder.java:81)
at org.elasticsearch.common.settings.ImmutableSettings$Builder.replacePropertyPlaceholders(ImmutableSettings.java:1060)
at org.elasticsearch.node.internal.InternalSettingsPreparer.prepareSettings(InternalSettingsPreparer.java:101)
at org.elasticsearch.bootstrap.Bootstrap.initialSettings(Bootstrap.java:106)
at org.elasticsearch.bootstrap.Bootstrap.main(Bootstrap.java:177)
at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:32)
I did some investigating through Elasticsearch's code repository on GitHub and discovered the bit of code that pulls from the environment variables:
ImmutableSettings.java#resolvePlaceholder from elasticsearch@github
Namely, the lines inside that function that should be pulling from the environment variables are the ones that call System.getenv.
However, after resolvePlaceholder is run from inside PropertyPlaceholder#parseStringValue, the System.getenv call must be returning null, as that is the only way for that error to be thrown.
I wrote a simple test program that is essentially a copy of ImmutableSettings.java#resolvePlaceholder to test that System.getenv was pulling out the environment variables correctly on my system. This in fact returns the values I expect.
public class Cool {
    public static void main(String[] args) {
        System.out.println(resolvePlaceholder(args[0]));
    }

    public static String resolvePlaceholder(String placeholderName) {
        if (placeholderName.startsWith("env.")) {
            // explicit env var prefix
            System.out.println("1: placeholderName.startsWith(\"env.\")");
            return System.getenv(placeholderName.substring("env.".length()));
        }
        String value = System.getProperty(placeholderName);
        if (value != null) {
            System.out.println("2: System.getProperty");
            return value;
        }
        value = System.getenv(placeholderName);
        if (value != null) {
            System.out.println("3: System.getenv");
            return value;
        }
        return "Map should've had it";
    }
}
When run, this is the output, showing we are getting the set environment variables (keys hidden for obvious reasons):
[ec2-user@ip-172-31-34-195 ~]$ java Cool AWS_SECRET_KEY
3: System.getenv
XXXXXXXXXXXXXXXXXX
[ec2-user@ip-172-31-34-195 ~]$ java Cool AWS_ACCESS_KEY_ID
3: System.getenv
XXXXXXXXXXXXXXXXXX
What is it about elasticsearch that isn't able to parse my environment variables from elasticsearch.yml? I've done quite a bit of digging at this point but I'm sure there is a simple solution around the corner. Any help would be very much appreciated.
I figured out the issue.
As I am running Elasticsearch as a Linux service rather than a shell application, it has access to no environment variables except for a very select few.
I added the following line to the end of /etc/sysconfig/elasticsearch to load the environment variables I wanted to be available to the program:
. /path/to/environment/variables
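The sourced file just needs to export the variables that elasticsearch.yml references, along these lines (the path above and the values below are placeholders):

export AWS_ACCESS_KEY_ID=XXXXXXXXXXXXXXXXXX
export AWS_SECRET_KEY=XXXXXXXXXXXXXXXXXX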

Acceleo java wrapping service doesn't take complex parameter - Invalid result for expression self.invoke

I can't call a Java wrapping service in Acceleo because it doesn't recognize the parameter's type. This is my simple test case: the main template calls a query stored in Services.mtl, which calls the Java service that just returns the name of a "Send" object.
Main.mtl
[file ('system.P', false, 'UTF-8')]
[for (t : Send | aSystemBehavior.transitions)]
[getName(t)/]
[/for]
[/file]
Services.mtl
[query public getName(arg0 : Send) : String
= invoke('myPackage.Services', 'getName(myPackage.Send)', Sequence{arg0})
/]
Services.java
public class Services {
    public String getName(Send t) {
        return t.getName();
    }
}
The Error Log shows:
Invalid result for expression
self.invoke('myPakage.Services',
'getName(myPakage.Send)', Sequence {arg0}) at line 0 in
Module services for query getName(Send). Last recorded value of self
was org.eclipse.emf.ecore.impl.DynamicEObjectImpl#1f00eb36 (eClass:
org.eclipse.emf.ecore.impl.EClassImpl#2c2aade3 (name: Send)
(instanceClassName: null) (abstract: false, interface: false)).
Problem found while generating the file system.P'.
If I use a String as parameter type instead of Send, everything works fine.
Has the package containing the service "Services" been exported? If not, open the file MANIFEST.MF, go to the Runtime tab and add its package to the list of exported packages. Also, are you sure that your "Send" object has a name? This message only indicates that null was returned by the query getName.
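Concretely, the export amounts to a MANIFEST.MF entry like this (using the package name from the question):

Export-Package: myPackage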
I don't have this problem anymore... I created a new Acceleo project from scratch, and it works. I am not sure what the problem was... maybe it's something about the choice of metamodels to import during the creation of the module (I had to choose between the run-time and develop-time metamodel).

h2 with custom java alias and javac compiler issues in multi process environment

H2 database with custom function alias defined as:
create alias to_date as $$
java.util.Date toDate(java.lang.String dateString, java.lang.String pattern) {
    try {
        return new java.text.SimpleDateFormat(pattern).parse(dateString);
    } catch (java.text.ParseException e) {
        throw new java.lang.RuntimeException(e);
    }
}
$$;
H2 initialized as:
jdbc:h2:mem:testdb;INIT=runscript from 'classpath:create_alias.sql'
This is used in tests that are executed concurrently for multiple projects on a Jenkins instance. Sometimes such tests fail with the following error:
Could not get JDBC Connection; nested exception is org.h2.jdbc.JdbcSQLException: Syntax error in SQL statement "javac: file not found: org/h2/dynamic/TO_DATE.java
Usage: javac <options> <source files>
use -help for a list of possible options
"; SQL statement:
create alias to_date as $$
java.util.Date toDate(java.lang.String dateString, java.lang.String pattern) {
....
My guess is that org.h2.util.SourceCompiler assumes that only one instance of H2 is running at a time and writes the generated Java source to java.io.tmpdir, which is shared among all processes running under the same account. I propose the following fix:
Index: SourceCompiler.java
===================================================================
--- SourceCompiler.java (revision 5086)
+++ SourceCompiler.java (working copy)
@@ -40,7 +40,15 @@
      */
     final HashMap<String, Class<?>> compiled = New.hashMap();

-    private final String compileDir = Utils.getProperty("java.io.tmpdir", ".");
+    private final String compileDir;
+
+    {
+        // use a random folder under java.io.tmpdir so multiple H2 instances
+        // can compile at the same time without overwriting each other's files
+        File tmp = File.createTempFile("h2tmp", ".tmp");
+        tmp.mkdir();
+        compileDir = tmp.getAbsolutePath();
+    }

     static {
         Class<?> clazz;
Should I open a support ticket, or are there workarounds for this issue?
You can use the javax.tools.JavaCompiler API and provide your own in-memory JavaFileManager implementation to completely avoid creating those temp files.
BTW, Janino also supports the javax.tools.JavaCompiler API.
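A minimal sketch of that in-memory approach, assuming a JDK is available at runtime (class and method names here are illustrative, not H2's actual code):

import java.io.ByteArrayOutputStream;
import java.io.OutputStream;
import java.net.URI;
import java.util.Collections;
import javax.tools.*;

public class InMemoryCompiler {

    // Source held in memory instead of a .java file on disk.
    static class MemorySource extends SimpleJavaFileObject {
        final String code;
        MemorySource(String className, String code) {
            super(URI.create("mem:///" + className.replace('.', '/') + ".java"), Kind.SOURCE);
            this.code = code;
        }
        @Override
        public CharSequence getCharContent(boolean ignoreEncodingErrors) {
            return code;
        }
    }

    // Bytecode collected in memory instead of a .class file on disk.
    static class MemoryClass extends SimpleJavaFileObject {
        final ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        MemoryClass(String className) {
            super(URI.create("mem:///" + className.replace('.', '/') + ".class"), Kind.CLASS);
        }
        @Override
        public OutputStream openOutputStream() {
            return bytes;
        }
    }

    public static byte[] compile(String className, String sourceCode) {
        JavaCompiler compiler = ToolProvider.getSystemJavaCompiler(); // null on a plain JRE
        MemoryClass output = new MemoryClass(className);
        StandardJavaFileManager standard = compiler.getStandardFileManager(null, null, null);
        JavaFileManager manager = new ForwardingJavaFileManager<StandardJavaFileManager>(standard) {
            @Override
            public JavaFileObject getJavaFileForOutput(Location location, String name,
                    JavaFileObject.Kind kind, FileObject sibling) {
                return output; // redirect all compiler output into memory
            }
        };
        boolean ok = compiler.getTask(null, manager, null, null, null,
            Collections.singletonList(new MemorySource(className, sourceCode))).call();
        if (!ok) {
            throw new IllegalStateException("Compilation of " + className + " failed");
        }
        return output.bytes.toByteArray();
    }
}

Loading the resulting byte[] then only needs a small ClassLoader that defines the class from the array; nothing ever touches java.io.tmpdir.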
I had the same problem running multiple Jenkins executors with an Arquillian/Wildfly/H2 integration-test configuration. I found a workaround: setting the java.io.tmpdir property to the build directory in the test standalone.xml.

What is the alternative to PackageAdmin.getFragments

With OSGi 4.3, I understand PackageAdmin has been deprecated. How do you then find the fragments for a specific bundle - i.e. what is the alternative to PackageAdmin.getFragments(Bundle bundle)?
Use the BundleWiring API to find the provided wires in the osgi.wiring.host namespace:
BundleWiring myWiring = myBundle.adapt(BundleWiring.class);
List<BundleWire> wires = myWiring.getProvidedWires(HostNamespace.HOST_NAMESPACE);
for (BundleWire wire : wires) {
    Bundle fragment = wire.getRequirerWiring().getBundle();
}
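Wrapped up as a helper that mirrors the old PackageAdmin.getFragments(Bundle) contract (returning null when no fragments are attached), this might look like the following sketch; the class and method names are ours:

import java.util.ArrayList;
import java.util.List;
import org.osgi.framework.Bundle;
import org.osgi.framework.namespace.HostNamespace;
import org.osgi.framework.wiring.BundleWire;
import org.osgi.framework.wiring.BundleWiring;

public final class Fragments {

    // Mirrors PackageAdmin.getFragments(Bundle): null when nothing is attached.
    public static Bundle[] getFragments(Bundle host) {
        BundleWiring wiring = host.adapt(BundleWiring.class);
        if (wiring == null) {
            return null; // host bundle is unresolved or uninstalled
        }
        List<BundleWire> wires = wiring.getProvidedWires(HostNamespace.HOST_NAMESPACE);
        if (wires == null || wires.isEmpty()) {
            return null;
        }
        List<Bundle> fragments = new ArrayList<Bundle>();
        for (BundleWire wire : wires) {
            fragments.add(wire.getRequirerWiring().getBundle());
        }
        return fragments.toArray(new Bundle[0]);
    }
}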
