gogo: CommandNotFoundException: Command not found: services

I know some of the commands changed names when Apache Felix started using Gogo.
For example: ps --> lb (list bundles)
What is the equivalent of services <BUNDLENO>?
I am trying to get the following output from my console:
services 5
Distributed OSGi Zookeeper-Based Discovery Single-Bundle Distribution (6) provides:
-----------------------------------------------------------------------------------
... other services ...
----
objectClass = org.osgi.service.cm.ManagedService
felix.fileinstall.filename = org.apache.cxf.dosgi.discovery.zookeeper.cfg
service.id = 38
service.pid = org.apache.cxf.dosgi.discovery.zookeeper
zookeeper.host = localhost
zookeeper.port = 2181
zookeeper.timeout = 3000

Use the inspect command with the service namespace:
inspect capability service 5
You can check more details via:
help inspect
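For illustration, the output should look roughly like this (the bundle symbolic name is a placeholder, the properties are copied from the question's example, and the exact layout depends on your Felix/Gogo version):
g! inspect capability service 5
<bundle-symbolic-name> [5] provides:
------------------------------------
service; org.osgi.service.cm.ManagedService with properties:
   felix.fileinstall.filename = org.apache.cxf.dosgi.discovery.zookeeper.cfg
   service.id = 38
   service.pid = org.apache.cxf.dosgi.discovery.zookeeper
   zookeeper.host = localhost
   zookeeper.port = 2181
   zookeeper.timeout = 3000
...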

How can we run the same feature file on multiple browsers sequentially? [duplicate]

I am able to execute a WebUI feature file against a single browser (Zalenium) using the parallel runner and a driver defined in karate-config.js. How can we execute the same WebUI feature file against multiple browsers (Zalenium) using the parallel runner or distributed testing?
Use a Scenario Outline and the parallel runner: Karate will run each row of an Examples table in parallel. But you will have to move the driver config into the feature itself.
Just add a parallel runner to this sample project and try: https://github.com/intuit/karate/tree/master/examples/ui-test
Scenario Outline: <type>
* def webUrlBase = karate.properties['web.url.base']
* configure driver = { type: '#(type)', showDriverLog: true }
* driver webUrlBase + '/page-01'
* match text('#placeholder') == 'Before'
* click('{}Click Me')
* match text('#placeholder') == 'After'
Examples:
| type |
| chrome |
| geckodriver |
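For reference, a minimal JUnit 5 parallel runner for such a project could look like this (a sketch; the class name, classpath location, and thread count are illustrative):
import com.intuit.karate.Results;
import com.intuit.karate.Runner;
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

class UiParallelTest {
    @Test
    void testParallel() {
        // run every feature under classpath:ui on 2 threads
        Results results = Runner.path("classpath:ui").parallel(2);
        assertEquals(0, results.getFailCount(), results.getErrorMessages());
    }
}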
There are other ways you can experiment with. Here is another pattern: keep a normal Scenario in main.feature, and call it from a Scenario Outline in a separate "special" feature that is used only when you want this kind of parallelization of UI tests.
Scenario Outline: <config>
* configure driver = config
* call read('main.feature')
Examples:
| config! |
| { type: 'chromedriver' } |
| { type: 'geckodriver' } |
| { type: 'safaridriver' } |
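Note that the ! suffix on the config! column header tells Karate to evaluate each cell as embedded JSON instead of a plain string, which is what lets a whole driver config object be passed per row.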
EDIT - also see this answer: https://stackoverflow.com/a/62325328/143475
And for other ideas: https://stackoverflow.com/a/61685169/143475
EDIT - it is possible to re-use the same browser instance for all tests and the Karate CI regression test does this, which is worth studying for ideas: https://stackoverflow.com/a/66762430/143475

Possible reasons for a Groovy program running as a Kubernetes Job dumping threads during execution

I have a simple Groovy script that leverages the GPars library's withPool functionality to launch HTTP GET requests to two internal API endpoints in parallel.
The script runs fine locally, both directly and as a Docker container.
When I deploy it as a Kubernetes Job (in our internal EKS cluster, 1.20), it runs there as well, but the moment it hits the first withPool call I see a giant thread dump; execution continues, though, and completes successfully.
NOTE: Containers in our cluster run with the following pod security context:
securityContext:
  fsGroup: 2000
  runAsNonRoot: true
  runAsUser: 1000
Environment
# From the k8s job container
groovy@app-271df1d7-15848624-mzhhj:/app$ groovy --version
WARNING: Using incubator modules: jdk.incubator.foreign, jdk.incubator.vector
Groovy Version: 4.0.0 JVM: 17.0.2 Vendor: Eclipse Adoptium OS: Linux
groovy@app-271df1d7-15848624-mzhhj:/app$ ps -ef
UID PID PPID C STIME TTY TIME CMD
groovy 1 0 0 21:04 ? 00:00:00 /bin/bash bin/run-script.sh
groovy 12 1 42 21:04 ? 00:00:17 /opt/java/openjdk/bin/java -Xms3g -Xmx3g --add-modules=ALL-SYSTEM -classpath /opt/groovy/lib/groovy-4.0.0.jar -Dscript.name=/usr/bin/groovy -Dprogram.name=groovy -Dgroovy.starter.conf=/opt/groovy/conf/groovy-starter.conf -Dgroovy.home=/opt/groovy -Dtools.jar=/opt/java/openjdk/lib/tools.jar org.codehaus.groovy.tools.GroovyStarter --main groovy.ui.GroovyMain --conf /opt/groovy/conf/groovy-starter.conf --classpath . /tmp/script.groovy
groovy 116 0 0 21:05 pts/0 00:00:00 bash
groovy 160 116 0 21:05 pts/0 00:00:00 ps -ef
Script (relevant parts)
@Grab('org.codehaus.gpars:gpars:1.2.1')
import static groovyx.gpars.GParsPool.withPool
import groovy.json.JsonSlurper
final def jsl = new JsonSlurper()
//...
while (!(nextBatch = getBatch(batchSize)).isEmpty()) {
    def devThread = Thread.start {
        withPool(poolSize) {
            nextBatch.eachParallel { kw ->
                String url = dev + "&" + "query=$kw"
                try {
                    def response = jsl.parseText(url.toURL().getText(connectTimeout: 10.seconds, readTimeout: 10.seconds,
                            useCaches: true, allowUserInteraction: false))
                    devResponses[kw] = response
                } catch (e) {
                    println("\tFailed to fetch: $url | error: $e")
                }
            }
        }
    }
    def stgThread = Thread.start {
        withPool(poolSize) {
            nextBatch.eachParallel { kw ->
                String url = stg + "&" + "query=$kw"
                try {
                    def response = jsl.parseText(url.toURL().getText(connectTimeout: 10.seconds, readTimeout: 10.seconds,
                            useCaches: true, allowUserInteraction: false))
                    stgResponses[kw] = response
                } catch (e) {
                    println("\tFailed to fetch: $url | error: $e")
                }
            }
        }
    }
    devThread.join()
    stgThread.join()
}
Dockerfile
FROM groovy:4.0.0-jdk17 as builder
USER root
RUN apt-get update && apt-get install -yq bash curl wget jq
WORKDIR /app
COPY bin /app/bin
RUN chmod +x /app/bin/*
USER groovy
ENTRYPOINT ["/bin/bash"]
CMD ["bin/run-script.sh"]
The bin/run-script.sh simply downloads the above Groovy script at runtime and executes it.
wget "$GROOVY_SCRIPT" -O "$LOCAL_FILE"
...
groovy "$LOCAL_FILE"
As soon as the execution hits the first call to withPool(poolSize), there's a giant thread dump, but execution continues.
I'm trying to figure out what could be causing this behavior. Any ideas 🤷🏽‍♂️?
Thread dump
For posterity, answering my own question here.
The issue turned out to be this log4j2 JVM hot-patch that we're currently leveraging to fix the recent log4j2 vulnerability. This agent (running as a DaemonSet) patches all running JVMs in all our k8s clusters.
This somehow causes my OpenJDK 17 based app to thread-dump. I found the same issue with an Elasticsearch 8.1.0 deployment as well (it also ships a bundled OpenJDK 17). That one is a long-running service, so I could see a thread dump happening pretty much every half hour! Interestingly, there are other JVM services (and some Solr 8 deployments) that don't have this issue 🤷🏽‍♂️.
Anyway, I worked with our devops team to temporarily exclude the node that the deployment was running on, and lo and behold, the thread dumps disappeared!
Balance in the universe has been restored 🧘🏻‍♂️.
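In case it helps others: one way to keep such a patching DaemonSet off a specific node is a label plus nodeAffinity. This is only a sketch (the label name is made up, and it assumes you can edit the DaemonSet's pod template):
kubectl label node <node-name> jvm-hotpatch=disabled

# DaemonSet pod template addition:
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
      - matchExpressions:
        - key: jvm-hotpatch
          operator: NotIn
          values:
          - disabled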

Is there a simple way to list the consumer components for an interface?

Fellow coders,
I'm currently trying to find a simple and concise way to get a listing of the services/components that use a given interface. I'm using the Gogo shell of a running Liferay 7.1.x server and can't seem to find an easy and direct way to do just that.
We want to override references to the used service via OSGi configuration, but first need to find all components using it.
As there are static, reluctant references to the service component, simply providing an alternative with a higher service ranking is not a viable solution.
Here are the gogo related bundles I'm using:
35|Active | 6|Apache Felix Gogo Command (1.0.2)|1.0.2
36|Active | 6|Apache Felix Gogo Runtime (1.1.0.LIFERAY-PATCHED-2)|1.1.0.LIFERAY-PATCHED-2
72|Active | 6|Apache Felix Gogo Shell (1.1.0)|1.1.0
542|Active | 10|Liferay Foundation - Liferay Gogo Shell - Impl (1.0.13)|1.0.13
543|Active | 10|Liferay Gogo Shell Web (2.0.25)|2.0.25
So far I've been able to list all providers of an interface via se (interface=com.liferay.saml.runtime.servlet.profile.WebSsoProfile):
{com.liferay.saml.runtime.profile.WebSsoProfile, com.liferay.saml.runtime.servlet.profile.WebSsoProfile}={service.id=6293, service.bundleid=79, service.scope=bundle, component.name=com.liferay.saml.opensaml.integration.internal.servlet.profile.WebSsoProfileImpl, component.id=3963}
"Registered by bundle:" de.haufe.leong.com.liferay.saml.opensaml.integration [79]
"Bundles using service"
com.liferay.saml.web_2.0.11 [82]
com.liferay.saml.impl_2.0.12 [78]
See all bundle requirements via: inspect cap service:
com.liferay.saml.impl_2.0.12 [78] requires:
...
service; com.liferay.saml.runtime.profile.WebSsoProfile, com.liferay.saml.runtime.servlet.profile.WebSsoProfile provided by:
de.haufe.leong.com.liferay.saml.opensaml.integration [79]
...
But listing the actual services from within these bundles that use the given interface (or the service component) has eluded me so far.
The only solution I see so far is listing all provided services of these bundles via scr:list bid and then checking each component with scr:info componentId to see whether it uses the WebSsoProfile service.
Do you guys know a faster way to find the services using the WebSsoProfile service?
[EDIT]: We solved the problem without having to provide config overrides for all consumers of the WebSsoProfile service; instead, we ensure that our implementation is used by deactivating the default service on server startup. You can see the approach described here.
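(For reference, that kind of deactivation can also be done ad hoc from Gogo via the Felix SCR commands, e.g. scr:disable com.liferay.saml.opensaml.integration.internal.servlet.profile.WebSsoProfileImpl, using the component name from the se output above.)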
Anyway, for debugging purposes this kind of lookup would be very useful.
So if anyone knows a way to retrieve the list of all consumers of an interface then please post your solution!
The standard solution is using the inspect command. It has a special namespace for services. Since a service registration is a capability, you can use inspect capability service:
g! inspect c service
org.apache.felix.framework [0] provides:
----------------------------------------
service; org.osgi.service.resolver.Resolver with properties:
service.bundleid = 0
service.id = 1
service.scope = singleton
service; org.osgi.service.packageadmin.PackageAdmin with properties:
service.bundleid = 0
service.id = 2
service.scope = singleton
service; org.osgi.service.startlevel.StartLevel with properties:
service.bundleid = 0
service.id = 3
service.scope = singleton
....
However, I find this command seriously useless: it is inflexible and has horrible output.
Gogo, though, is way more powerful than people know. For one, you can use all the methods on the BundleContext.
g! servicereferences org.osgi.service.startlevel.StartLevel null
000003 0 StartLevel
If you want to see the properties of each service:
g! each (servicereferences org.osgi.service.startlevel.StartLevel null) { $it properties }
[service.id=3, objectClass=[Ljava.lang.String;@4acd14d7, service.scope=singleton, service.bundleid=0]
You can make this into a built-in function:
g! srv = { servicereferences $1 null }
servicereferences $1 null
g! srv org.osgi.service.startlevel.StartLevel
000003 0 StartLevel
Unfortunately, OSGi added an overloaded getServiceReferences() method to BundleContext that throws an NPE when called with null, and Gogo is awful with overloaded methods :-(
However, it is trivial to add your own command with a Declarative Services component. You could use the following:
import java.util.stream.Stream;

import org.apache.felix.service.command.Descriptor;
import org.apache.felix.service.command.annotations.GogoCommand;
import org.osgi.framework.BundleContext;
import org.osgi.framework.Constants;
import org.osgi.framework.InvalidSyntaxException;
import org.osgi.framework.ServiceReference;
import org.osgi.service.component.annotations.Activate;
import org.osgi.service.component.annotations.Component;

@GogoCommand(scope = "service", function = "srv")
@Component(service = ServiceCommand.class)
public class ServiceCommand {

    @Activate
    BundleContext context;

    @Descriptor("List all services")
    public ServiceReference<?>[] srv() throws InvalidSyntaxException {
        return context.getAllServiceReferences(null, null);
    }

    @Descriptor("List all services that match the name")
    public ServiceReference<?>[] srv(String... names) throws InvalidSyntaxException {
        ServiceReference<?>[] allServiceReferences = context.getAllServiceReferences(null, null);
        if (allServiceReferences == null)
            return new ServiceReference[0];
        return Stream.of(allServiceReferences)
            .filter(r -> {
                // keep references whose objectClass matches any of the given names
                String[] objectClass = (String[]) r.getProperty(Constants.OBJECTCLASS);
                for (String oc : objectClass) {
                    for (String name : names)
                        if (oc.contains(name))
                            return true;
                }
                return names.length == 0;
            })
            .sorted()
            .toArray(ServiceReference[]::new);
    }
}
This adds the srv command to Gogo:
g! srv Help Basic
000004 1 Basic
000005 1 Inspect
Update: if you want to find which bundles are using a specific service, you could use:
g! each (srv X) { $it usingbundles }
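Applied to the question's case, each (srv WebSsoProfile) { $it usingbundles } should then list the bundles consuming the WebSsoProfile service.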
Make sure you have the following dependencies on your build path:
-buildpath: \
org.osgi.service.component.annotations,\
org.apache.felix.gogo.runtime, \
org.osgi.framework

MapR installation failing for single node cluster

I was following the quick installation guide for a single-node cluster. For this I used a 20GB storage file for MapR-FS, but during installation it fails with 'Unable to find disks: /maprfs/storagefile'.
Here is my configuration file.
# Each Node section can specify nodes in the following format
# Hostname: disk1, disk2, disk3
# Specifying disks is optional. If not provided, the installer will use the values of 'disks' from the Defaults section
[Control_Nodes]
maprlocal.td.td.com: /maprfs/storagefile
#control-node2.mydomain: /dev/disk3, /dev/disk9
#control-node3.mydomain: /dev/sdb, /dev/sdc, /dev/sdd
[Data_Nodes]
#data-node1.mydomain
#data-node2.mydomain: /dev/sdb, /dev/sdc, /dev/sdd
#data-node3.mydomain: /dev/sdd
#data-node4.mydomain: /dev/sdb, /dev/sdd
[Client_Nodes]
#client1.mydomain
#client2.mydomain
#client3.mydomain
[Options]
MapReduce1 = true
YARN = true
HBase = true
MapR-DB = true
ControlNodesAsDataNodes = true
WirelevelSecurity = false
LocalRepo = false
[Defaults]
ClusterName = my.cluster.com
User = mapr
Group = mapr
Password = mapr
UID = 2000
GID = 2000
Disks = /maprfs/storagefile
StripeWidth = 3
ForceFormat = false
CoreRepoURL = http://package.mapr.com/releases
EcoRepoURL = http://package.mapr.com/releases/ecosystem-4.x
Version = 4.0.2
MetricsDBHost =
MetricsDBUser =
MetricsDBPassword =
MetricsDBSchema =
Below is the error that I am getting.
2015-04-16 08:18:03,659 callbacks 42 [INFO]: Running task: [Verify Pre-Requisites]
2015-04-16 08:18:03,661 callbacks 87 [ERROR]: maprlocal.td.td.com: Unable to find disks: /maprfs/storagefile from /maprfs/storagefile remove disks: /dev/sda,/dev/sda1,/dev/sda2,/dev/sda3 and retry
2015-04-16 08:18:03,662 callbacks 91 [ERROR]: failed: [maprlocal.td.td.com] => {"failed": true}
2015-04-16 08:18:03,667 installrunner 199 [ERROR]: Host: maprlocal.td.td.com has 1 failures
2015-04-16 08:18:03,668 common 203 [ERROR]: Control Nodes have failures. Please fix the failures and re-run the installation. For more information refer to the installer log at /opt/mapr-installer/var/mapr-installer.log
Please help me here.
Thanks
Shashi
The error is resolved by adding the --skip-checks option to the install command (this bypasses the [Verify Pre-Requisites] disk check that was failing):
/opt/mapr-installer/bin/install --skip-checks new

How to configure collectd-snmp to poll a router?

I am trying to use a Raspberry Pi to poll the interface MIB (IF-MIB) of a TP-LINK router and then send the metrics to Librato.
Setting up collectd and integrating it with Librato is no problem at all - I am successfully tracking other metrics (cpu, memory, etc.). The challenge I have is with the collectd-snmp plugin configuration.
I installed net-snmp and can "see" the router:
pi@raspberrypi ~ $ snmpwalk -v 1 -c public 192.168.0.1 IF-MIB::ifInOctets
IF-MIB::ifInOctets.2 = Counter32: 1206812646
IF-MIB::ifInOctets.3 = Counter32: 1548296842
IF-MIB::ifInOctets.5 = Counter32: 19701783
IF-MIB::ifInOctets.10 = Counter32: 0
IF-MIB::ifInOctets.11 = Counter32: 0
IF-MIB::ifInOctets.15 = Counter32: 0
IF-MIB::ifInOctets.16 = Counter32: 0
IF-MIB::ifInOctets.22 = Counter32: 0
IF-MIB::ifInOctets.23 = Counter32: 0
The Pi is on 192.168.0.20, the router on 192.168.0.1.
My collectd.conf is as follows:
<Plugin snmp>
  <Data "ifmib_if_octets32">
    Type "if_octets"
    Table true
    Instance "IF-MIB::ifDescr"
    Values "IF-MIB::ifInOctets" "IF-MIB::ifOutOctets"
  </Data>
  <Host "localhost">
    Address "192.168.0.1"
    Version 1
    Community "public"
    Collect "ifmib_if_octets32"
    Interval 60
  </Host>
</Plugin>
When I restart collectd I get the following error:
pi@raspberrypi ~ $ sudo service collectd restart
[....] Restarting statistics collection and monitoring daemon: collectdNo log handling enabled - turning on stderr logging
MIB search path: $HOME/.snmp/mibs:/usr/share/mibs/site:/usr/share/snmp/mibs:/usr/share/mibs/iana:/usr/share/mibs/ietf:/usr/share/mibs/netsnmp
Cannot find module (IF-MIB): At line 0 in (none)
[2015-01-24 23:01:31] snmp plugin: read_objid (IF-MIB::ifDescr) failed.
[2015-01-24 23:01:31] snmp plugin: No such data configured: `ifmib_if_octets32'
No log handling enabled - turning on stderr logging
MIB search path: $HOME/.snmp/mibs:/usr/share/mibs/site:/usr/share/snmp/mibs:/usr/share/mibs/iana:/usr/share/mibs/ietf:/usr/share/mibs/netsnmp
Cannot find module (IF-MIB): At line 0 in (none)
[2015-01-24 23:01:33] snmp plugin: read_objid (IF-MIB::ifDescr) failed.
[2015-01-24 23:01:33] snmp plugin: No such data configured: `ifmib_if_octets32'
. ok
It obviously can't find the MIB, and it doesn't even seem to be looking at the router's IP. Any suggestions on how to configure this correctly?
I figured it out. The working config below drops the Instance "IF-MIB::ifDescr" line (the lookup that was failing in the log above) and renames the Data block:
<Plugin snmp>
  <Data "if_Octets">
    Type "if_octets"
    Table true
    Values "IF-MIB::ifInOctets" "IF-MIB::ifOutOctets"
  </Data>
  <Host "tp-link">
    Address "192.168.0.1"
    Version 1
    Community "public"
    Collect "if_Octets"
    Interval 60
  </Host>
</Plugin>
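As a side note (my addition, not part of the original fix): the collectd snmp plugin also accepts plain numeric OIDs, so if MIB name resolution keeps failing, the same Data block should work without any MIB files, using the standard OIDs for ifInOctets and ifOutOctets:
<Data "if_Octets">
  Type "if_octets"
  Table true
  Values ".1.3.6.1.2.1.2.2.1.10" ".1.3.6.1.2.1.2.2.1.16"
</Data>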
