JMS Activation spec on Liberty: "WAS_EndpointInitialState" full profile equivalent property? - websphere-liberty

We are migrating some apps from WAS full profile to WAS Liberty profile.
Some apps have MDBs and need JMS Activation Specs definitions connected to MQ.
In order to enforce strict FIFO ordering of messages in a cluster, we set the "WAS_EndpointInitialState" property to "INACTIVE" on those Activation Specs to tell WAS full profile not to start the Activation Spec on startup. When the cluster starts, we start (i.e. "resume") the activation on one server only.
Q: How can we achieve this with Liberty (v16.0.x)?
I don't see an equivalent parameter within the "properties.wmqJms" stanza.
Thanks

Liberty doesn't have an equivalent parameter/capability for activation specs.
You can open a request for enhancement here:
https://www.ibm.com/developerworks/rfe/?PROD_ID=544
In case it helps in the meantime, a crude way of simulating the capability is to start the server with the jmsActivationSpec elements commented out, and then make configuration updates to uncomment them as you want them activated.

Unfortunately, as-is (with v16.0.0.3 and the current beta version), it is not possible to deploy an application with MDBs in production, due to a serious lack of functionality in the Liberty profile (JMS activation specs).
When using the jmsActivationSpec + properties.wmqJms stanzas, it is impossible:
to configure the activation to stop after x failed attempts to consume a message. Liberty tries to consume the message forever without any notification!!
to start the activation in an inactive state on startup, so it is impossible to enforce the FIFO paradigm on a queue when deployed in a cluster (or collective or other form of cluster).
Those are already captured in the following RFEs:
https://www.ibm.com/developerworks/rfe/execute?use_case=viewRfe&CR_ID=95885
https://www.ibm.com/developerworks/rfe/execute?use_case=viewRfe&CR_ID=95794
https://www.ibm.com/developerworks/rfe/execute?use_case=viewRfe&CR_ID=88543
For us it's a clear no-go to move to the WebSphere Liberty profile for those reasons.

This is way too late for the OP, but in case someone comes here looking for a current answer.
Liberty / Open Liberty now offer (as of 18.0.0.1) such a function, which you can enable via the autoStart attribute, e.g.:
<jmsActivationSpec autoStart="false" id="myJMSActSpec"/>
See here for a quick example of how you would use the EndpointControl MBean and/or the server resume CLI command to start message delivery into the server.
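For illustration, here is a minimal sketch of driving that MBean from Java code running inside the same Liberty JVM (for example from a small admin servlet). The ObjectName and operation name below are assumptions to verify against the Liberty documentation for your version, and the endpoint target name is purely illustrative; the server resume command in wlp/bin is the CLI counterpart mentioned above.

import java.lang.management.ManagementFactory;
import javax.management.MBeanServer;
import javax.management.ObjectName;

public class ResumeEndpoint {
    // Resumes message delivery for a paused endpoint, e.g. "MyApp#MyModule.jar#MyMDB" (illustrative name).
    public static void resume(String endpointTarget) throws Exception {
        MBeanServer mbs = ManagementFactory.getPlatformMBeanServer();
        // Assumed ObjectName of the server endpoint control MBean; verify against the Liberty docs.
        ObjectName control = new ObjectName("WebSphere:feature=kernel,name=ServerEndpointControl");
        mbs.invoke(control, "resume",
                new Object[] { endpointTarget },
                new String[] { String.class.getName() });
    }
}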

Related

Configure timing of opening ports in Spring-Boot application

Question:
Is there an option within Spring or its embedded servlet container to open ports only when Spring is ready to handle traffic?
Situation:
In the current setup I use a Spring Boot application running in Google Cloud Run.
Circumstances:
Cloud Run does not support liveness/readiness probes; it considers an open port as "application ready".
Cloud Run sends requests to the container although Spring is not ready to handle them.
Spring starts its servlet container and opens its ports while still spinning up its beans.
Problem:
Traffic to an unready application will result in a lot of HTTP 429 status codes.
This affects:
new deployments
scaling capabilities of Cloud Run
My desire:
Configure Spring/the servlet container to delay opening ports until the application is actually ready
Delaying opening ports until the application is ready would ease much pain without interfering too much with the existing code base.
Any alternatives not causing too much pain?
Things I found and considered not viable
Using native-image is not an option, as it is considered experimental and consumes more RAM at compile time than our deployment pipeline agents can allocate (max 8 GB vs. the needed 13 GB).
Another answer I found: readiness check for google cloud run - how?
I don't see how it could satisfy my needs, since Spring Boot startup time is still slow; that's why my initial idea was to delay opening ports.
I did not have time to test the following, but one thing I stumbled upon is
a blog post about running multiple processes within a container. Though it goes against container design principles, it seems viable until Cloud Run supports probes of any type.
As you are well aware of the fact that “Cloud Run currently does not have a readiness/liveness check to avoid sending requests to unready applications”, I would say there is not much that can be done on Cloud Run’s side except:
Try to optimise the Spring Boot app as per the docs.
Make a heavier entrypoint in the Cloud Run service that takes care of more setup tasks. This Stack Overflow thread mentions how “A ’heavier’ entrypoint will help post-deploy responsiveness, at the cost of slower cold-starts” (this is the most relevant solution from a Cloud Run perspective and outlines the issue correctly).
Run multiple processes in a container in Cloud Run, as you mentioned.
This question seems more directed at Spring Boot specifically and I found an article with a similar requirement.
However, if you absolutely need the app ready to serve when requests come in, we have another alternative to Cloud Run, Google Kubernetes Engine (GKE), which makes use of readiness/liveness probes.
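On the Spring Boot side, one pattern that may be worth testing, assuming a Spring Boot 2.x version where the embedded connector is only started after singleton beans finish initializing (worth verifying against the version in use), is to move the slow warm-up work into bean initialization, so the port is not bound until that work completes. A minimal sketch; the warm-up contents are placeholders:

import org.springframework.beans.factory.SmartInitializingSingleton;
import org.springframework.stereotype.Component;

@Component
public class StartupWarmUp implements SmartInitializingSingleton {

    // Runs during context refresh, after all singletons are created but (in
    // Spring Boot 2.x) before the embedded connector is started, so the port
    // should stay closed until this method returns.
    @Override
    public void afterSingletonsInstantiated() {
        // Illustrative placeholders: prime caches, open connection pools,
        // force-load classes, issue a warm-up call to downstream services, etc.
    }
}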

How to crash Jboss based on some condition

I am using JBoss 7.x, and have the following use case.
I am going to do load testing of messaging queues with JBoss. The queues are external to JBoss.
I will push a lot of messages into the queue, around 1000 messages. When around 100+ messages have been pushed, I want to crash JBoss. Later I want to restart JBoss to verify the message processing.
I had earlier made use of Byteman to crash the JVM using the following
JAVA_OPTS="-javaagent:/BYTEMAN_HOME/lib/byteman.jar=script:/QUICKSTART_HOME/jta-crash-rec/src/main/scripts/xa.btm ${JAVA_OPTS}"
Details are here: https://github.com/Naresh-Chaurasia/jboss-eap-quickstarts/tree/7.3.x/jta-crash-rec
In the above case, whenever an XA transaction happens, the JVM is crashed using Byteman, but in my case I want to crash the JVM/JBoss only after, let's say, 100+ messages, i.e. not for each transaction but after processing some number of messages.
I have also tried a few examples from here to get ideas of how to achieve it, but did not succeed: https://developer.jboss.org/docs/DOC-17213#top
Question: How can I crash JBoss / the running JVM using Byteman or some other way?
See the Programmer's Guide that comes bundled with the distribution.
Sections headed "CountDowns" and "Aborting Execution" provide what's necessary. These are built-in features of the Rule Language.
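If wiring Byteman in proves awkward, the "some other way" from the question can also be plain Java in a test build: count messages in the listener and hard-kill the JVM once a threshold is crossed. A rough sketch; the listener class, threshold and exit code are all illustrative:

import java.util.concurrent.atomic.AtomicInteger;
import javax.jms.Message;
import javax.jms.MessageListener;

public class CrashAfterNListener implements MessageListener {

    private static final AtomicInteger PROCESSED = new AtomicInteger();
    private static final int CRASH_AFTER = 100; // hypothetical threshold

    @Override
    public void onMessage(Message message) {
        // ... normal message processing would go here ...
        if (PROCESSED.incrementAndGet() > CRASH_AFTER) {
            // Runtime.halt() skips shutdown hooks, so it simulates a crash
            // more faithfully than System.exit().
            Runtime.getRuntime().halt(137);
        }
    }
}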

Stop WebSphere Liberty server if application fails to start

I'm looking for an option to stop my Liberty server if a specific application fails to start.
I can't find any option for this in the docs; the closest thing I've found to achieve this is health policies, but they don't look like a good fit.
You could write something (even as simple as a bash script) to check the logs for:
"CWWKZ0001I: Application {appName} started"
and if you don't see it in x time, then execute "/wlp/bin/server stop {serverName}"
You could do it all through MBean invocations via Java or REST calls by checking the state of the app (WebSphere:service=com.ibm.websphere.application.ApplicationMBean,name=*) and, if it's not 'Started' by x time, then invoking an API to stop it (for a collective environment you could use WebSphere:feature=collectiveController,type=ServerCommands,name=ServerCommands, otherwise you could use the OSGi framework API).
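For illustration, a bare-bones Java sketch of the log-scanning variant; the paths, application/server names and timeout are placeholders to adapt:

import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.concurrent.TimeUnit;

public class StopServerIfAppNotStarted {

    public static void main(String[] args) throws Exception {
        // Placeholder paths and names; adjust to your installation.
        Path log = Paths.get("/opt/ibm/wlp/usr/servers/myServer/logs/messages.log");
        String marker = "CWWKZ0001I: Application myApp started";
        long deadline = System.currentTimeMillis() + TimeUnit.MINUTES.toMillis(5);

        while (System.currentTimeMillis() < deadline) {
            if (Files.exists(log)
                    && Files.readAllLines(log).stream().anyMatch(l -> l.contains(marker))) {
                return; // application started, nothing to do
            }
            Thread.sleep(5_000);
        }
        // Timed out: stop the server via the CLI, as in the bash variant above.
        new ProcessBuilder("/opt/ibm/wlp/bin/server", "stop", "myServer")
                .inheritIO().start().waitFor();
    }
}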

is it possible to get an alert when application went down?

I am using WebSphere 6.2 and my requirement is that, as an admin, I have to get an alert when an application on the server has stopped or the server is down. How can I achieve this? Is this feature available in higher versions? Please help me.
Thanks in advance,
Raj
I'm assuming you mean WebSphere application server processes (rather than the physical server on which WebSphere is running) and individual applications running on those processes. I'm also assuming you mean when those elements have stopped unexpectedly rather than when somebody has deliberately stopped them.
If so, you're going to have to use external monitoring software to detect most of those conditions. We use a combination of scripts that scan for processes and specific error messages in logs and external site-monitoring software that checks for application responsiveness. Such scripts can be standalone, handwritten scripts, or run under generic monitoring tools from IBM (Tivoli) or 3rd-parties.
Alternatively, I think you should also be able to write something that uses JMX to read specific things about WebSphere state, and there is at least one sophisticated monitoring tool you could purchase, IBM Tivoli Composite Application Manager (ITCAM) for Application Diagnostics, which can monitor WebSphere internals.
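If you go down the JMX route, the plumbing is standard javax.management; a skeleton poller might look like the following. The service URL, ObjectName pattern and attribute name are placeholders that depend on your WebSphere version and enabled connector (WAS traditional typically requires its AdminClient/SOAP connector client rather than a plain RMI URL):

import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class AppStatePoller {

    public static void main(String[] args) throws Exception {
        // Placeholder service URL; the real one depends on the WAS connector in use.
        JMXServiceURL url = new JMXServiceURL("service:jmx:rmi:///jndi/rmi://washost:9999/jmxrmi");
        JMXConnector connector = JMXConnectorFactory.connect(url, null);
        try {
            MBeanServerConnection mbs = connector.getMBeanServerConnection();
            // Placeholder ObjectName pattern and attribute name; adjust per WAS version.
            ObjectName pattern = new ObjectName("WebSphere:type=Application,name=myApp,*");
            for (ObjectName app : mbs.queryNames(pattern, null)) {
                Object state = mbs.getAttribute(app, "state");
                if (!"STARTED".equals(String.valueOf(state))) {
                    System.out.println("ALERT: " + app + " is " + state); // hook your alerting here
                }
            }
        } finally {
            connector.close();
        }
    }
}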

What is Command level in MQ Series?

In WebSphere MQ, the command level for a queue manager is 701. What does it actually specify?
WebSphere products use a "[version].[release].[modification].[Fix Pack]" naming convention. For example 7.0.1.6 is the current release specified down to the Fix Pack level.
Fix packs are limited to bug fixes and very limited non-disruptive functionality enhancements.
Modifications can include functionality enhancements but no API changes. For example the Multi-Instance Queue Manager was introduced in 7.0.1.
Releases may introduce significant new function and limited API changes but are highly forward and backward compatible within the same version.
Versions encapsulate a core set of functionality. Changes at this level may sacrifice some backward compatibility in trade for significant new functionality. For example, WMQ Pub/Sub was moved from Message Broker to base MQ in the V7 release.
Since administrative functionality does not change in Fix Packs but may change at the Modification level, compatibility with administrative tools is based on the queue manager's Command Level.
There is an old but still useful TechNote which described this when the numbering conventions were adopted for WMQ.
It displays the major version number of WMQ, e.g. 530, 600, 700, 701. Despite being 'only' a .0.1 increment, WMQ 7.0.1 gets a new major version number due to a number of internal changes (e.g. multi-instance QMs), although WMQ 6.0.1.x and 6.0.2.x were both CMDLEVEL 600.
Command level, although similar to the V.R.M.F., is not exactly the same thing. The command level is used to allow configuration applications to know what commands (and attributes within those commands) will be understood by the command server.
The first thing any configuration application should do is discover the PLATFORM and CMDLEVEL of the queue manager. Then that application can determine which commands/attributes it would be acceptable to send to that queue manager.
It is possible that CMDLEVEL could be increased in the service stream. Then the V.R.M.F. would not necessarily match the CMDLEVEL. This would happen if some new external attributes were added in the service stream, so queue managers without that patch would not understand them, but queue managers with the patch would. How does an application determine what to send? Well, the CMDLEVEL would determine that and so would have to be upped by the patch.
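To make the "discover PLATFORM and CMDLEVEL first" step concrete, here is a rough PCF sketch in Java. The connection details are placeholders, and the PCF classes have lived in different packages across MQ client versions (com.ibm.mq.pcf in older ones, com.ibm.mq.headers.pcf in current ones), so adjust the imports accordingly:

import com.ibm.mq.constants.CMQC;
import com.ibm.mq.constants.CMQCFC;
import com.ibm.mq.headers.pcf.PCFMessage;
import com.ibm.mq.headers.pcf.PCFMessageAgent;

public class InquireCmdLevel {

    public static void main(String[] args) throws Exception {
        // Placeholder host, port and channel.
        PCFMessageAgent agent = new PCFMessageAgent("mqhost", 1414, "SYSTEM.DEF.SVRCONN");
        try {
            // Inquire the queue manager and read back its command level and platform.
            PCFMessage request = new PCFMessage(CMQCFC.MQCMD_INQUIRE_Q_MGR);
            PCFMessage[] responses = agent.send(request);
            int cmdLevel = responses[0].getIntParameterValue(CMQC.MQIA_COMMAND_LEVEL);
            int platform = responses[0].getIntParameterValue(CMQC.MQIA_PLATFORM);
            System.out.println("CMDLEVEL=" + cmdLevel + " PLATFORM=" + platform);
        } finally {
            agent.disconnect();
        }
    }
}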
