I'd like to create a setup to evaluate messaging with JMS. The target environment would be a normal Payara, but to keep things simple, I'd like to test things out with Payara Micro (bundled jar). This way, I'd like to create a setup that can be ported easily. With JNDI lookups, there should be no problems with the code in this regard. Also, the coding part isn't really hard. Things I'd like to test with this setup:
- Consumer using message driven beans
- Producer
- accessing management queue (as I'd like to test how to enable blue/green-deployment)
Using the rar of the classic ActiveMQ, things were comparably easy. I set up a post-boot-commands.txt to deploy and configure the resource adapter with the following content:
create-resource-adapter-config --property ServerUrl='tcp://localhost:61616':UserName='admin':Password='admin' activemq-rar-5.15.11
create-connector-connection-pool --raname activemq-rar-5.15.11 --connectiondefinition javax.jms.ConnectionFactory --ping true --isconnectvalidatereq true jms/myConnectionPool
create-connector-resource --poolname jms/myConnectionPool jms/myConnectionFactory
create-admin-object --raname activemq-rar-5.15.11 --restype javax.jms.Queue --property PhysicalName=Q1 jms/myQueue
This lets Payara Micro deploy and configure the rar before deploying my app's war file. The message-driven bean could then be written with this configuration:
@MessageDriven(activationConfig = {
    @ActivationConfigProperty(propertyName = "destination", propertyValue = "Q1"),
    @ActivationConfigProperty(propertyName = "resourceAdapter", propertyValue = "activemq-rar-5.15.11"),
    @ActivationConfigProperty(propertyName = "destinationType", propertyValue = "javax.jms.Queue")
})
@TransactionAttribute(TransactionAttributeType.REQUIRED)
public class MyMDB implements MessageListener {
    ...
}
As the producer was easy, I'll skip that part here. Things worked well until I started to work with the management queue. Following the management example that comes with the broker (which uses some deprecated code :(), I ran into conflicts: the solution used code from the Artemis client, which then conflicted with the ConnectionFactory classes from the classic ActiveMQ rar. As I have a bad feeling about using the classic ActiveMQ's rar with ActiveMQ Artemis, I tried to switch to the Artemis rar. Unfortunately, finding information on how to configure the resource adapter with Payara's tooling turned out to be hell on earth.
By taking a look at the sources of the class ActiveMQResourceAdapter, I figured out the following configuration:
deploy --type rar /home/tools/artemis-rar-2.11.0.rar
create-resource-adapter-config --property connectionParameters='host=localhost;port=61616':JndiParams='java.naming.factory.initial=org.apache.activemq.artemis.jndi.ActiveMQInitialContextFactory;connectionFactory.ConnectionFactory=tcp://localhost:61616;queue.jms/myQueue=Q1':useJndi='true':entries='ConnectionFactory':userName='admin':password='admin' artemis-rar-2.11.0
create-connector-connection-pool --raname artemis-rar-2.11.0 --connectiondefinition javax.jms.ConnectionFactory --ping true --isconnectvalidatereq true jms/ConnectionFactoryPool
create-connector-resource --poolname jms/myConnectionPool jms/myConnectionFactory
create-admin-object --raname artemis-rar-2.11.0 --restype javax.jms.Queue --property PhysicalName=Q1 jms/myQueue
The JNDI properties are an attempt to mimic the contents of the jndi.properties from the examples. The good part is that on startup Payara Micro says:
[2020-03-26T20:51:58.812+0100] [] [INFO] [] [org.apache.activemq.artemis.ra] [tid: _ThreadID=48 _ThreadName=pool-18-thread-1] [timeMillis: 1585252318812] [levelValue: 800] AMQ151007: Resource adaptor started
The bad news is that it then continues with:
[2020-03-26T20:51:58.843+0100] [] [WARNUNG] [] [fish.payara.boot.runtime.BootCommand] [tid: _ThreadID=1 _ThreadName=main] [timeMillis: 1585252318843] [levelValue: 900] Boot Command create-connector-connection-pool failed PlainTextActionReporterFAILUREInvalid connection definition. Connector Module with connection definition javax.jms.ConnectionFactory not found.
And:
[2020-03-26T20:51:58.850+0100] [] [WARNUNG] [] [fish.payara.boot.runtime.BootCommand] [tid: _ThreadID=1 _ThreadName=main] [timeMillis: 1585252318850] [levelValue: 900] Boot Command create-connector-resource failed PlainTextActionReporterFAILUREAttribute value (pool-name = jms/myConnectionPool) is not found in list of connector connection pools.
And:
[2020-03-26T20:51:58.856+0100] [] [WARNUNG] [] [fish.payara.boot.runtime.BootCommand] [tid: _ThreadID=1 _ThreadName=main] [timeMillis: 1585252318856] [levelValue: 900] Boot Command create-admin-object failed PlainTextActionReporterFAILUREResource Adapter artemis-rar-2.11.0 does not contain any resource type for admin-object. Please specify another res-adapter.
So, it fails to register a connection factory and a queue. As a consequence, the application throws exceptions later on when looking up resources.
I have to admit that I am not experienced with JMS and resource adapters / JCAs. It's been frustrating as I have burned days with this already. So, any help with this is welcome.
Answering my own question now. It feels like it took me ages to figure this out, but I finally got it working. So, the correct configuration with asadmin is as follows:
deploy --type rar /home/tools/artemis-rar-2.11.0.rar
create-resource-adapter-config --property ConnectorClassName='org.apache.activemq.artemis.core.remoting.impl.netty.NettyConnectorFactory':ConnectionParameters='host=localhost;port=61616':UserName='admin':Password='admin' artemis-rar-2.11.0
create-connector-connection-pool --raname artemis-rar-2.11.0 --connectiondefinition org.apache.activemq.artemis.ra.ActiveMQRAConnectionFactory --ping true jms/ConnectionFactoryPool
create-connector-resource --poolname jms/ConnectionFactoryPool jms/myConnectionFactory
As you can see, there is no configuration for an admin object. The reason is that the Artemis rar seemingly does not provide any admin object. This way, you can't look up your destinations (queues and topics) via JNDI, but need to create them with the JMS session using the destination's physical name. Now, the configuration of the MDB:
@MessageDriven(activationConfig = {
    @ActivationConfigProperty(propertyName = "destination", propertyValue = "Q1"),
    @ActivationConfigProperty(propertyName = "destinationType", propertyValue = "javax.jms.Queue"),
    @ActivationConfigProperty(propertyName = "acknowledgeMode", propertyValue = "Auto-acknowledge"),
    @ActivationConfigProperty(propertyName = "resourceAdapter", propertyValue = "artemis-rar-2.11.0")
})
public class MyMDB implements MessageListener {
    ...
}
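Since the Artemis rar exposes no admin objects, a producer cannot look the queue up via JNDI and instead creates it from the session by its physical name. A minimal sketch, assuming the jms/myConnectionFactory connector resource created above and JMS 2.0 (the bean and method names are hypothetical):

```java
import javax.annotation.Resource;
import javax.ejb.Stateless;
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.JMSException;
import javax.jms.Queue;
import javax.jms.Session;

@Stateless
public class MyProducer {

    // Connector resource registered via create-connector-resource above
    @Resource(lookup = "jms/myConnectionFactory")
    private ConnectionFactory connectionFactory;

    public void send(String text) throws JMSException {
        try (Connection connection = connectionFactory.createConnection();
             Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE)) {
            // "Q1" is the physical destination name, not a JNDI name
            Queue queue = session.createQueue("Q1");
            session.createProducer(queue).send(session.createTextMessage(text));
        }
    }
}
```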
There is one problem, though: you can't access the management queue to control the broker. Although you can create the session and the destination, the messages need to be of a certain class. The necessary class, however, is not returned by the configured connection factory, leading to exceptions at runtime. So, one needs to search for other approaches to access the management part.
Having said all this, I'd like to share some constructive criticism in case the Artemis devs stumble across this. Although the documentation explains that there is a JCA architecture for Artemis for Java EE users, it is nowhere explained how to set it up or configure it. There is not even a link to the rar file on Maven (which has a strange group-id, btw). Of course, there are plenty of examples coming with Artemis, but from what I can see, there is none showing how to set up the rar. Instead, they are set up using the client jar, but I doubt this approach would work with MDBs. The point to start with was the rar example that showed the configuration properties, but not their values (at least, not for the ConnectorClassName property). One can then only take a look at the sources on GitHub and try to transform the configuration other users used for other application servers. Let me know if I got something wrong with my approach, but things were much simpler to set up with the classic ActiveMQ.
Related
This is regarding the CDI spec of Quarkus. I would like to understand: is there a configuration bean for Quarkus? How does one do any sort of configuration in Quarkus?
If I get it right, the original question is about @Configuration classes that can contain @Bean definitions. If so, then CDI producer methods and fields annotated with @javax.enterprise.inject.Produces are the corresponding alternative.
Application configuration is a completely different question though and Jay is right that the Quarkus configuration reference is the ultimate source of information ;-).
First of all, reading how the CDI spec of Quarkus differs from Spring is important.
Please refer this guide:
https://quarkus.io/guides/cdi-reference
The takeaway from this guide is that there is @Produces, which is an alternative to a @Configuration bean in Quarkus.
Let us take an example of a lib that might require configuration through code: the Microsoft Azure IoT Service Client.
public class IotHubConfiguration {

    @ConfigProperty(name = "iothub.device.connection.string")
    String connectionString;

    private static final Logger LOG = Logger.getLogger(IotHubConfiguration.class);

    @Produces
    public ServiceClient getIot() throws URISyntaxException, IOException {
        LOG.info("Inside Service Client bean");
        if (connectionString == null) {
            LOG.info("Connection String is null");
            throw new RuntimeException("IOT CONNECTION STRING IS NULL");
        }
        ServiceClient serviceClient = new ServiceClient(connectionString, IotHubServiceClientProtocol.AMQPS);
        serviceClient.open();
        LOG.info("opened Service Client Successfully");
        return serviceClient;
    }
}
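The produced ServiceClient can then be injected anywhere in the application; a hypothetical consumer bean (imports for the SDK class omitted):

```java
@ApplicationScoped
public class DeviceNotifier {

    // Resolved from the @Produces method in IotHubConfiguration
    @Inject
    ServiceClient serviceClient;
}
```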
For all libs vertically integrated with Quarkus, application.properties can be used, and then you get a driver object for that broker/db available directly through @Inject in your @ApplicationScoped/@Singleton bean. So, why is that?
- To simplify and unify configuration
- To make sure no code is required for configuring anything, i.e. database config, broker config, Quarkus config, etc.

This drastically reduces the amount of code written for configuration and also the JUnit tests needed to cover that code.
Let us take an example where a Kafka producer configuration needs to be added in application.properties:
kafka.bootstrap.servers=${KAFKA_BROKER_URL:localhost:9092}
mp.messaging.outgoing.incoming_kafka_topic_test.topic=${KAFKA_INPUT_TOPIC_FOR_IOT_HUB:input_topic1}
mp.messaging.outgoing.incoming_kafka_topic_test.connector=smallrye-kafka
mp.messaging.outgoing.incoming_kafka_topic_test.value.serializer=org.apache.kafka.common.serialization.StringSerializer
mp.messaging.outgoing.incoming_kafka_topic_test.key.serializer=org.apache.kafka.common.serialization.StringSerializer
mp.messaging.outgoing.incoming_kafka_topic_test.health-readiness-enabled=true
For full blown project reference: https://github.com/JayGhiya/QuarkusExperiments/tree/initial_version_v1/KafkaProducerQuarkus
Quarkus References for Config:
https://quarkus.io/guides/config-reference
Example for reactive sql config: https://quarkus.io/guides/reactive-sql-clients
Now let us talk about a bonus feature that Quarkus provides which improves developer experience by at least an order of magnitude: profile-driven development and testing.
Quarkus provides three profiles:
- dev - Activated when in development mode (i.e. quarkus:dev)
- test - Activated when running tests
- prod - The default profile when not running in development or test mode
Let us just say that in the given example you wanted to have different topics for development and different topics for production. Let us achieve that!
%dev.mp.messaging.outgoing.incoming_kafka_topic_test.topic=${KAFKA_INPUT_TOPIC_FOR_IOT_HUB:input_topic1}
%prod.mp.messaging.outgoing.incoming_kafka_topic_test.topic=${KAFKA_INPUT_TOPIC_FOR_IOT_HUB:prod_topic}
This is how simple it is. This is extremely useful in cases where your deployments run with SSL-enabled brokers/dbs etc. and for dev purposes you have insecure local brokers/dbs. This is a game changer.
I am trying to set up Jaeger to collect traces from a Spring Boot application. When my app starts up, I am getting this warning message:
warn io.jaegertracing.internal.senders.SenderResolver - No sender factories available. Using NoopSender, meaning that data will not be sent anywhere!
I use this method to get the Jaeger tracer:
@Bean
Tracer jaegerTracer(@Value(defaulTraceName) String service) {
    SamplerConfiguration samplerConfig = SamplerConfiguration.fromEnv().withType("const").withParam(1);
    ReporterConfiguration reporterConfig = ReporterConfiguration.fromEnv().withLogSpans(true);
    Configuration config = new Configuration(service).withSampler(samplerConfig).withReporter(reporterConfig);
    return config.getTracer();
}
I have manually instrumented the code, but no traces show up in the jaeger UI. I have been stuck on this problem for a few days now and would appreciate any help given!
In my pom file, I have dependencies on jaeger-core and opentracing-api
Solved by adding a dependency on jaeger-thrift in the pom file.
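For reference, the dependency looks like this (the version shown is an assumption; use the one matching your jaeger-core version):

```xml
<dependency>
    <groupId>io.jaegertracing</groupId>
    <artifactId>jaeger-thrift</artifactId>
    <version>1.1.0</version>
</dependency>
```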
Hi, I am migrating to WildFly 10 from JBoss 6.1.0.Final.
In JBoss, the queue definition format is like:
<queue name="TEST_QUEUE">
<entry name="/queue/TEST_QUEUE"/>
</queue>
and the MDB annotation is:
@ActivationConfigProperty(propertyName = "destination",
                          propertyValue = "queue/TEST_QUEUE")
Now in WildFly it's like below (reference link):
<jms-queue name="TEST_QUEUE" entries="jms/queue/TEST_QUEUE java:jboss/exported/jms/queue/TEST_QUEUE"/>
with the activation config property:
@ActivationConfigProperty(propertyName = "destination",
                          propertyValue = "jms/queue/TEST_QUEUE")
In WildFly I have tried removing the jms/ prefix from the queue name and from the annotation; it works fine in WildFly with the same queue name, like:
<jms-queue name="TEST_QUEUE" entries="queue/TEST_QUEUE java:jboss/exported/queue/TEST_QUEUE"/>
Now my question is: was the jms/ prefix in the queue name added purposefully? Is it good practice to write the queue name without the jms/ prefix?
From the Java EE specification, section EE.5.7.1.2 (Programming Interfaces for Resource Manager Connection Factory References):
This specification recommends, but does not require, that all resource manager connection factory references be organized in the subcontexts of the application component’s environment, using a different subcontext for each resource manager type. For example, all JDBC™ DataSource references should be declared in the java:comp/env/jdbc subcontext, all JMS connection factories in the java:comp/env/jms subcontext, all JavaMail connection factories in the java:comp/env/mail subcontext, and all URL connection factories in the java:comp/env/url subcontext. Note that resource manager connection factory references declared via annotations will not, by default, appear in any subcontext
The jms subcontext is not mandatory. It is just a best practice.
Servers may or may not follow this pattern. JBoss was not following it; WildFly is. But ultimately, it is YOUR decision to do what you want. Still, this is a really good practice to follow, as it is cleaner for everybody.
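To illustrate, following the recommendation means declaring component references under the jms subcontext explicitly (note the spec quote above: annotated references do not land in a subcontext by default). A hypothetical sketch:

```java
import javax.annotation.Resource;
import javax.ejb.Stateless;
import javax.jms.ConnectionFactory;
import javax.jms.Queue;

@Stateless
public class QueueClient {

    // Ends up at java:comp/env/jms/MyConnectionFactory, per the convention
    @Resource(name = "jms/MyConnectionFactory")
    private ConnectionFactory connectionFactory;

    // Ends up at java:comp/env/jms/TestQueue
    @Resource(name = "jms/TestQueue")
    private Queue testQueue;
}
```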
I'm working with Infinispan 8.1.0.Final and WildFly 10 in a cluster setup.
Each server is started by running:
C:\wildfly-10\bin\standalone.bat --server-config=standalone-ha.xml -b 10.09.139.215 -u 230.0.0.4 -Djboss.node.name=MyNode
I want to use Infinispan in distributed mode in order to have a distributed cache. But due to mandatory requirements I need to build a JGroups channel that dynamically reads some properties from a file.
This channel is necessary for me to build a cluster group based on TYPE and NAME (for example Type1-MyCluster). Each server that wants to join a cluster has to use the related channel.
Searching the net, I found some code like the one below:
public class JGroupsChannelServiceActivator implements ServiceActivator {

    @Override
    public void activate(ServiceActivatorContext context) {
        stackName = "udp";
        try {
            channelServiceName = ChannelService.getServiceName(CHANNEL_NAME);
            createChannel(context.getServiceTarget());
        } catch (IllegalStateException e) {
            log.log(Level.INFO, "channel seems to already exist, skipping creation and binding.");
        }
    }

    void createChannel(ServiceTarget target) {
        InjectedValue<ChannelFactory> channelFactory = new InjectedValue<>();
        ServiceName serviceName = ChannelFactoryService.getServiceName(stackName);
        ChannelService channelService = new ChannelService(CHANNEL_NAME, channelFactory);
        target.addService(channelServiceName, channelService)
              .addDependency(serviceName, ChannelFactory.class, channelFactory).install();
    }
}
I have created the META-INF/services/....JGroupsChannelServiceActivator file.
When I deploy my war into the server, the operation fails with this error:
"{\"WFLYCTL0180: Services with missing/unavailable dependencies\" => [\"jboss.jgroups.channel.clusterWatchdog is missing [jboss.jgroups.stack.udp]\"]}"
What am I doing wrong?
How can I build a channel the way I need?
In what way I can tell to infinispan to use that channel for distributed caching?
The proposal you found is implementation-dependent and might cause a lot of problems during upgrades. I wouldn't recommend it.
Let me check if I understand your problem correctly - you need to be able to create a JGroups channel manually because you use some custom properties for it.
If that is the case - you could obtain a JGroups channel as suggested here. But then you obtain a JChannel instance which is already connected (so this might be too late for your case).
Unfortunately, since WildFly manages the JChannel (it is required for clustering sessions, EJBs, etc.), the only way to get full control of the JChannel creation process is to use Infinispan in embedded (library) mode. This would require adding infinispan-embedded to your WAR dependencies. After that you can initialize it similarly to this test.
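A rough sketch of the embedded route, assuming Infinispan's JGroupsChannelLookup hook and the channelLookup transport property (the lookup class, stack file, and cluster name below are hypothetical):

```java
import java.util.Properties;

import org.infinispan.remoting.transport.jgroups.JGroupsChannelLookup;
import org.jgroups.Channel;
import org.jgroups.JChannel;

// Hypothetical lookup that builds the JChannel from your own properties file.
public class CustomChannelLookup implements JGroupsChannelLookup {

    @Override
    public Channel getJGroupsChannel(Properties p) {
        try {
            // Custom stack definition read dynamically at startup
            return new JChannel("my-udp.xml");
        } catch (Exception e) {
            throw new IllegalStateException(e);
        }
    }

    @Override public boolean shouldConnect()    { return true; }
    @Override public boolean shouldDisconnect() { return true; }
    @Override public boolean shouldClose()      { return true; }
}

// Wiring it into an embedded cache manager might look like:
// EmbeddedCacheManager cm = new DefaultCacheManager(
//         new GlobalConfigurationBuilder().transport()
//                 .defaultTransport()
//                 .clusterName("Type1-MyCluster")
//                 .addProperty("channelLookup", CustomChannelLookup.class.getName())
//                 .build());
```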
So I was loosely following the Amdatu JPA video tutorial and I almost got it working...
At a glance everything seems to be fine; only the DataSource service is not resolved, and I don't know why. It seems to me that it is registered. So how would I go about debugging this? There should be some way to debug this, right?
When starting, I have this in the message log:
[CM Configuration Updater (Update: pid=org.amdatu.jpa.datasourcefactory.dd8bf61e-01b1-4732-9b0c-bba96e1f5aff)] DEBUG org.amdatu.jpa.datasourcefactory - ServiceEvent REGISTERED - [javax.sql.DataSource] - org.amdatu.jpa.datasourcefactory
Output of "dm":
[5] org.amdatu.jpa.datasourcefactory
org.osgi.service.cm.ManagedServiceFactory(service.pid=org.amdatu.jpa.datasourcefactory) registered
org.osgi.service.log.LogService service optional available
javax.sql.DataSource(validationQuery=SELECT 1,name=ManagedDS,driverName=postgresql,serviceName=ManagedDS) registered
org.osgi.service.log.LogService service optional available
org.osgi.service.jdbc.DataSourceFactory (osgi.jdbc.driver.class=org.postgresql.Driver) service required available
javax.transaction.TransactionManager service required available
So the output above should mean that DataSource is registered, right?
[31] org.amdatu.jpa.extender
org.amdatu.jpa.extender.PersistenceBundleManager() registered
org.osgi.service.log.LogService service optional available
javax.persistence.spi.PersistenceProvider service required available
active (Meta-Persistence=*) bundle optional available
java.lang.Object(bundle=32) registered
org.osgi.service.log.LogService service optional available
org.amdatu.jpa.extender.PersistenceBundleManager service required available
org.amdatu.jpa.extender.PersistenceUnitInfoImpl@7175ee92 unregistered
javax.persistence.spi.PersistenceProvider (javax.persistence.provider=org.eclipse.persistence.jpa.PersistenceProvider) service required available
javax.sql.DataSource (name=ManagedDS) service required unavailable
Everything further that depends on the DataSource is obviously not resolved:
javax.persistence.EntityManager service required unavailable
So what I don't get is why the DataSource is not resolved there. I checked and it seems it is registered with the property name=ManagedDS, but I am quite new to Felix Dependency Manager so I am not really sure what is happening here.
I also tried adding this
@ServiceDependency(filter = "(name=ManagedDS)")
private volatile DataSource ds;
to one of my services, but that too cannot be resolved. Thanks for any help regarding this, but what I would be most grateful for is a way to debug and solve this myself.
So, the Amdatu video tutorial suggested I should add
Import-Package: javax.sql;version=1.0.0
to my bundles. I tried removing that, and it works. (I did that after the import stopped resolving when I set all versions to small ranges. I still don't know why that happened, and I wish I had tried this sooner.)
So my guess as to why it works now: packages in my OSGi container were probably using two different versions/instances of javax.sql.DataSource, probably one from the postgres package and the other from someplace else (the system bundle?). Maybe one of the OSGi gurus can comment on this and clear it up?
Another sub-question: since the video suggested it is a good thing to add that import, what can I do to make it work? Or, if it is not important, should I just not bother?
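On that sub-question: the system bundle typically exports JRE packages such as javax.sql without a version (i.e. 0.0.0), so an import requiring version 1.0.0 can only wire to some other bundle that happens to export the package at that version, potentially a different class space than the one the DataSource service was registered with. If you keep the import, leaving it unversioned lets it wire to the same exporter as everything else; a manifest sketch:

```
Import-Package: javax.sql
```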