glassfish v3 asadmin how to specify XA on connection factory - jms

This worked in GFV2:
$AS_HOME/bin/asadmin \
--host $AS_ADMIN_HOST \
--user $AS_ADMIN_USER \
--port $AS_ADMIN_PORT \
create-jms-resource \
--restype javax.jms.QueueConnectionFactory \
--description XA\ Queue\ Connection\ Factory \
--property Name=myXAQueueConnectionFactory:SupportsXA=true \
jms/myXAQueueConnectionFactory
But SupportsXA=true no longer works, and I can't find a replacement in the GFv3 manuals, nor via our friend Google: how do I specify XA transactionality when using asadmin to configure the factory? Anybody out there know?

--property ...:transaction-support=XATransaction:...
This seems to be what I needed, and it works. I did not find it by searching the documentation or Google; I deduced it by looking at the domain.xml file and taking an educated guess at the syntax.
I am now trying to figure out which property name/value pair sets the connection-validation property the way I want it.
The question has morphed into: what is the full asadmin syntax and property set for GFv3 connection factories?
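For what it's worth, here is a small Python sketch (the helper function and its name are my own illustration, not part of asadmin) of how the colon-delimited --property argument is assembled. Colons separate the name=value pairs, so a colon inside a value has to be backslash-escaped:

```python
def asadmin_properties(props):
    """Join name=value pairs into asadmin's colon-delimited --property string.

    ':' is the pair separator, so any ':' inside a value is escaped
    with a backslash.
    """
    def escape(value):
        return value.replace(":", "\\:")
    return ":".join(f"{name}={escape(value)}" for name, value in props.items())

# The property set deduced above for an XA-capable factory:
print(asadmin_properties({
    "Name": "myXAQueueConnectionFactory",
    "transaction-support": "XATransaction",
}))
# Name=myXAQueueConnectionFactory:transaction-support=XATransaction
```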

Substrate Parsing mdns packet failed

I am currently following this tutorial. On a single machine it worked as expected: the nodes connect and create and finalize blocks. Now I want to do the same over the internet. I have a server (Ubuntu 16.04 Xenial) with port 30333 open, on which I am running this command:
./target/release/node-template \
--base-path /tmp/alice \
--chain ./customSpecRaw.json \
--alice \
--rpc-methods Unsafe \
--port 30333 \
--ws-port 9945 \
--rpc-port 9933 \
--node-key 0000000000000000000000000000000000000000000000000000000000000001 \
--telemetry-url 'wss://telemetry.polkadot.io/submit/ 0' \
--validator \
--name Node01
On my PC (Manjaro 20.2.1 Nibia), which has no open ports, I am running this command:
./target/release/node-template
--base-path /tmp/bob
--chain ./customSpecRaw.json
--bob
--port 30334
--ws-port 9946
--rpc-port 9934
--telemetry-url 'wss://telemetry.polkadot.io/submit/ 0'
--validator
--rpc-methods Unsafe
--name Node02
--bootnodes /ip4/<SERVER IP>/tcp/30333/p2p/<BOOTNODE P2P ID>
In the terminal I see network traffic on both nodes, so networking should not be the problem. But there are 0 peers on both nodes, and no blocks are created or finalized. Two errors are printed repeatedly in the bootnode's terminal:
Error while dialing /dns/telemetry.polkadot.io/tcp/443/x-parity-wss/%2Fsubmit%2F: Custom { kind: Other, error: Timeout }
and
Parsing mdns packet failed: LabelIsNotAscii
Both errors are already output before I try to connect to the bootnode from my PC.
Both nodes are compiled from the same code and are using the same custom chain spec file generated on the server.
So my questions are:
What do the errors/warnings mean?
How do I fix them in order to get the expected results?
If the errors/warnings are not causing the problem, what else could it be?
I recloned and recompiled both nodes, and somehow it is working now. I did not change anything in the commands except adding the --no-mdns flag.
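The --bootnodes value is a libp2p multiaddr. A minimal Python sketch of its shape (the helper function, IP, and peer ID are illustrative; the real peer ID is printed by the bootnode at startup as "Local node identity is: ..."):

```python
def bootnode_multiaddr(ip, port, peer_id):
    """Build the libp2p multiaddr passed to --bootnodes.

    Format: /ip4/<IP>/tcp/<PORT>/p2p/<PEER ID>
    """
    return f"/ip4/{ip}/tcp/{port}/p2p/{peer_id}"

print(bootnode_multiaddr("203.0.113.10", 30333,
                         "12D3KooWEyoppNCUx8Yx66oV9fJnriXwCcXwDDUA2kj6vnc6iDEp"))
```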

Errors in Karaf upgrade from 4.2.0.M1 to 4.2.0.M2

We were upgrading Karaf and in the transition from 4.2.0.M1 to 4.2.0.M2 we noticed several errors like this related to BootFeatures:
2021-02-04T15:43:17,674 | ERROR | activator-1-thread-2 | BootFeaturesInstaller | 11 - org.apache.karaf.features.core - 4.2.1 | Error installing boot features
org.apache.felix.resolver.reason.ReasonException: Unable to resolve root: missing requirement [root] osgi.identity; osgi.identity=ssh; type=karaf.feature; version="[4.3.1.SNAPSHOT,4.3.1.SNAPSHOT]"; filter:="(&(osgi.identity=ssh)(type=karaf.feature)(version>=4.3.1.SNAPSHOT)(version<=4.3.1.SNAPSHOT))" [caused by: Unable to resolve ssh/4.3.1.SNAPSHOT: missing requirement [ssh/4.3.1.SNAPSHOT] osgi.identity; osgi.identity=org.apache.karaf.shell.ssh; type=osgi.bundle; version="[4.3.1.SNAPSHOT,4.3.1.SNAPSHOT]"; resolution:=mandatory [caused by: Unable to resolve org.apache.karaf.shell.ssh/4.3.1.SNAPSHOT: missing requirement [org.apache.karaf.shell.ssh/4.3.1.SNAPSHOT] osgi.wiring.package; filter:="(&(osgi.wiring.package=org.apache.karaf.jaas.boot.principal)(version>=4.3.0)(!(version>=5.0.0)))" [caused by: Unable to resolve org.apache.karaf.jaas.boot/4.3.1.SNAPSHOT: missing requirement [org.apache.karaf.jaas.boot/4.3.1.SNAPSHOT] osgi.wiring.package; filter:="(&(osgi.wiring.package=org.osgi.framework)(version>=1.9.0)(!(version>=2.0.0)))"]]]
The error always looks similar, although the name of the failing feature differs each time (for example kar or ssh), so it seems that all the BootFeatures are failing and whichever resolves first shows the error. Something appears to have changed between 4.2.0.M1 and 4.2.0.M2 in how Karaf features are managed.
We use Java 8 and OSGi 6. Besides that, we use Gradle as the build system and the Aether library (org.ops4j.pax.url.mvn) to handle Maven artifact resolution.
This is the content of our org.apache.karaf.features.cfg file:
featuresRepositories = \
mvn:org.apache.karaf.features/framework/4.2.0.M2/xml/features, \
mvn:org.apache.karaf.features/spring/4.2.0.M2/xml/features, \
mvn:org.apache.karaf.features/standard/4.2.0.M2/xml/features, \
mvn:org.apache.karaf.features/enterprise/4.2.0.M2/xml/features, \
mvn:org.apache.activemq/activemq-karaf/5.16.1/xml/features, \
mvn:org.apache.cxf.karaf/apache-cxf/3.2.7/xml/features, \
mvn:org.apache.cxf.dosgi/cxf-dosgi/2.3.0/xml/features, \
mvn:org.ops4j.pax.jdbc/pax-jdbc-features/1.4.5/xml/features, \
file:/opt/data/features/feature.xml
featuresBoot = \
(instance, \
package, \
log, \
ssh, \
aries-blueprint, \
framework, \
system, \
eventadmin, \
feature, \
shell, \
management, \
service, \
jaas, \
shell-compat, \
deployer, \
diagnostic, \
wrap, \
bundle, \
config, \
kar, \
jndi, \
jdbc, \
transaction, \
pax-jdbc-config, \
pax-jdbc-pool-common, \
pax-jdbc-postgresql, \
pax-jdbc-pool-c3p0, \
cxf-core, \
cxf-jaxrs, \
cxf-jaxws, \
cxf-dosgi-provider-rs, \
cxf-dosgi-provider-ws, \
activemq-broker-noweb), \
(local_bundle_1, ..., local_bundle_N)
featuresBootAsynchronous=false
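If I read the Karaf provisioning docs correctly, the parenthesised groups in featuresBoot are installed as separate stages, in order. A small Python sketch (my own helper, abbreviated feature names) of how that grouping parses:

```python
import re

def boot_stages(features_boot):
    """Split a Karaf featuresBoot value into its staged groups.

    Parenthesised groups are installed as separate stages, in order.
    """
    stages = re.findall(r"\(([^)]*)\)", features_boot)
    return [[f.strip() for f in s.split(",") if f.strip()] for s in stages]

value = "(instance, package, log), (local_bundle_1, local_bundle_2)"
print(boot_stages(value))
# [['instance', 'package', 'log'], ['local_bundle_1', 'local_bundle_2']]
```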
Does anyone have any idea about what could be the cause of these errors after upgrading from 4.2.0.M1 to 4.2.0.M2?
Thanks in advance
Karaf is resolving the most recent version of the ssh feature for you. To correct this, add blacklist entries for the versions that are showing up in the log to this
file: etc/org.apache.karaf.features.xml
<blacklistedRepositories>
...
<repository>mvn:org.apache.karaf.features/standard/4.3.1-SNAPSHOT/xml/features</repository>
...
</blacklistedRepositories>
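To find which repositories to blacklist, it can help to split the Pax URL mvn: coordinates and look for unwanted SNAPSHOT versions. A Python sketch of that (the parser is my own; the layout mvn:groupId/artifactId/version[/type[/classifier]] is the standard Pax URL form):

```python
def parse_mvn_url(url):
    """Split a Pax URL 'mvn:' coordinate into its parts.

    Layout: mvn:groupId/artifactId/version[/type[/classifier]]
    """
    parts = url.removeprefix("mvn:").split("/")
    keys = ("group", "artifact", "version", "type", "classifier")
    return dict(zip(keys, parts))

repos = [
    "mvn:org.apache.karaf.features/standard/4.2.0.M2/xml/features",
    "mvn:org.apache.karaf.features/standard/4.3.1-SNAPSHOT/xml/features",
]
# Flag any repository whose version is a SNAPSHOT, as in the log above.
snapshots = [r for r in repos if "SNAPSHOT" in parse_mvn_url(r)["version"]]
print(snapshots)
```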

spark: class not found exception

I'm getting a class-not-found exception when running the spark2-submit command in the console. Can anyone please suggest what the error could be?
spark2-submit --class spark.FirstQuestion.SingleLookupFilter \
--master yarn --deploy-mode client \
--name aws_spark \
--conf "spark.app.id=spark_aws_run" \
FirstQuestion-0.0.1-SNAPSHOT.jar \
/user/ec2-user/spark_assignment/input/yellow_tripdata_* \
/user/ec2-user/spark_assignment/spark_output/single_row_lookup_SparkRDD
It does not complain about "class not found". It complains that the class SingleLookupFilter does not have a main(String[]) method, which is what makes a class a valid entry point to the program. Check whether the method is there; if in doubt, paste the code of the class here.
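Either way, it is worth confirming that the class is actually packaged in the jar under the expected path. A Python sketch of that check (the helper is my own; jars are plain zip archives, so the standard zipfile module can inspect them):

```python
import zipfile

def jar_contains_class(jar_path, fqcn):
    """Check whether a jar contains the .class entry for a fully
    qualified class name, e.g. spark.FirstQuestion.SingleLookupFilter."""
    entry = fqcn.replace(".", "/") + ".class"
    with zipfile.ZipFile(jar_path) as jar:
        return entry in jar.namelist()
```

For example, jar_contains_class("FirstQuestion-0.0.1-SNAPSHOT.jar", "spark.FirstQuestion.SingleLookupFilter") should return True if the --class argument matches what was built.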

Spark 2.0: Relative path in absolute URI (spark-warehouse)

I'm trying to migrate from Spark 1.6.1 to Spark 2.0.0 and I am getting a weird error when trying to read a csv file into SparkSQL. Previously, when I would read a file from local disk in pyspark I would do:
Spark 1.6
df = sqlContext.read \
.format('com.databricks.spark.csv') \
.option('header', 'true') \
.load('file:///C:/path/to/my/file.csv', schema=mySchema)
In the latest release I think it should look like this:
Spark 2.0
spark = SparkSession.builder \
.master('local[*]') \
.appName('My App') \
.getOrCreate()
df = spark.read \
.format('csv') \
.option('header', 'true') \
.load('file:///C:/path/to/my/file.csv', schema=mySchema)
But I am getting this error no matter how many different ways I try to adjust the path:
IllegalArgumentException: 'java.net.URISyntaxException: Relative path in
absolute URI: file:/C:/path//to/my/file/spark-warehouse'
Not sure if this is just an issue with Windows or if there is something I am missing. I was excited that the spark-csv package is now a part of Spark right out of the box, but I can't seem to get it to read any of my local files anymore. Any ideas?
I did some digging in the latest Spark documentation and noticed a new configuration setting that I hadn't seen before:
spark.sql.warehouse.dir
So I went ahead and added this setting when I set up my SparkSession:
spark = SparkSession.builder \
.master('local[*]') \
.appName('My App') \
.config('spark.sql.warehouse.dir', 'file:///C:/path/to/my/') \
.getOrCreate()
That seems to set the working directory, and then I can just feed my filename directly into the csv reader:
df = spark.read \
.format('csv') \
.option('header', 'true') \
.load('file.csv', schema=mySchema)
Once I set the Spark warehouse, Spark was able to locate all of my files, and my app now finishes successfully. The amazing thing is that it runs about 20 times faster than it did in Spark 1.6, so they really have done some impressive work optimizing their SQL engine. Spark it up!
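The malformed URI in the error (file:/C:/... rather than file:///C:/...) suggests a hand-built file: path. One way to avoid that on Windows is to let pathlib build the URI; a minimal sketch, with an illustrative path:

```python
from pathlib import PureWindowsPath

# Build a well-formed file URI from a Windows path; naive string
# concatenation easily yields the malformed file:/C:/... form that
# triggers "Relative path in absolute URI".
uri = PureWindowsPath(r"C:\path\to\my").as_uri()
print(uri)  # file:///C:/path/to/my
```

The resulting string is what a setting like spark.sql.warehouse.dir expects.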

Authenticating a dockerized Spring Boot app against a dockerized & linked ActiveMQ

I'm starting my activemq container like so:
docker run -p 61616:61616 -p 8161:8161 --name='activemq' -d \
-e 'ACTIVEMQ_LOGLEVEL=DEBUG' \
-e 'ACTIVEMQ_ADMIN_USER=bot' \
-e 'ACTIVEMQ_ADMIN_PASSWORD=blahblah' \
-e 'ACTIVEMQ_OWNER_LOGIN=bot' \
-e 'ACTIVEMQ_OWNER_PASSWORD=blahblah' \
-e 'ACTIVEMQ_JMX_LOGIN=bot' \
-e 'ACTIVEMQ_JMX_PASSWORD=blahblah' \
-v /data/activemq:/data/activemq \
-v /var/log/activemq:/var/log/activemq \
webcenter/activemq:latest
The application.yml for my app has the following:
spring:
activemq:
broker-url: ${ACTIVEMQ_PORT_61616_TCP}
user: bot
password: blahblah
and I'm starting my app container like so:
docker run --name='myapp' \
-w /app/ -v /home/ubuntu/myapp/logs:/app/logs \
-v /home/ubuntu/myapp/config:/app/config \
-v /tmp:/tmp -p 4980:4980 \
--link activemq:activemq \
-d myapp
Note that I'm linking my app's container with the activemq container.
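If I remember Docker's legacy --link behaviour correctly, it injects variables like ACTIVEMQ_PORT_61616_TCP into the linked container in the form tcp://&lt;container IP&gt;:61616, which is what the ${ACTIVEMQ_PORT_61616_TCP} placeholder in application.yml resolves to. A Python sketch of that resolution (the IP and fallback are illustrative):

```python
import os

# Docker's legacy --link injects variables like this into the linked
# container (the IP is illustrative):
os.environ.setdefault("ACTIVEMQ_PORT_61616_TCP", "tcp://172.17.0.2:61616")

# Spring resolves ${ACTIVEMQ_PORT_61616_TCP} the same way: take the env
# value; a default can be expressed in Spring as
# ${ACTIVEMQ_PORT_61616_TCP:tcp://localhost:61616}.
broker_url = os.environ.get("ACTIVEMQ_PORT_61616_TCP", "tcp://localhost:61616")
print(broker_url)
```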
Finally, when I attempt to send a message to ActiveMQ from my app (via a Camel route, but I don't think that's relevant), I see this in the logs:
javax.jms.JMSSecurityException: User name [bot] or password is invalid.
at org.apache.activemq.util.JMSExceptionSupport.create(JMSExceptionSupport.java:52)
at org.apache.activemq.ActiveMQConnection.syncSendPacket(ActiveMQConnection.java:1417)
at org.apache.activemq.ActiveMQConnection.ensureConnectionInfoSent(ActiveMQConnection.java:1522)
at org.apache.activemq.ActiveMQConnection.start(ActiveMQConnection.java:527)
at org.springframework.jms.listener.AbstractJmsListeningContainer.refreshSharedConnection(AbstractJmsListeningContainer.java:400)
at org.springframework.jms.listener.DefaultMessageListenerContainer.refreshConnectionUntilSuccessful(DefaultMessageListenerContainer.java:907)
at org.springframework.jms.listener.DefaultMessageListenerContainer.recoverAfterListenerSetupFailure(DefaultMessageListenerContainer.java:882)
at org.springframework.jms.listener.DefaultMessageListenerContainer$AsyncMessageListenerInvoker.run(DefaultMessageListenerContainer.java:1053)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.SecurityException: User name [bot] or password is invalid.
at org.apache.activemq.security.JaasAuthenticationBroker.addConnection(JaasAuthenticationBroker.java:80)
at org.apache.activemq.broker.BrokerFilter.addConnection(BrokerFilter.java:92)
at org.apache.activemq.broker.MutableBrokerFilter.addConnection(MutableBrokerFilter.java:97)
at org.apache.activemq.broker.TransportConnection.processAddConnection(TransportConnection.java:764)
at org.apache.activemq.broker.jmx.ManagedTransportConnection.processAddConnection(ManagedTransportConnection.java:79)
at org.apache.activemq.command.ConnectionInfo.visit(ConnectionInfo.java:139)
at org.apache.activemq.broker.TransportConnection.service(TransportConnection.java:294)
at org.apache.activemq.broker.TransportConnection$1.onCommand(TransportConnection.java:148)
at org.apache.activemq.transport.MutexTransport.onCommand(MutexTransport.java:50)
at org.apache.activemq.transport.WireFormatNegotiator.onCommand(WireFormatNegotiator.java:113)
at org.apache.activemq.transport.AbstractInactivityMonitor.onCommand(AbstractInactivityMonitor.java:270)
at org.apache.activemq.transport.TransportSupport.doConsume(TransportSupport.java:83)
at org.apache.activemq.transport.tcp.TcpTransport.doRun(TcpTransport.java:214)
at org.apache.activemq.transport.tcp.TcpTransport.run(TcpTransport.java:196)
... 1 common frames omitted
Caused by: javax.security.auth.login.LoginException: No LoginModules configured for activemq-domain
at javax.security.auth.login.LoginContext.init(LoginContext.java:264)
at javax.security.auth.login.LoginContext.<init>(LoginContext.java:417)
at org.apache.activemq.security.JaasAuthenticationBroker.addConnection(JaasAuthenticationBroker.java:72)
... 14 common frames omitted
Needless to say, it all 'just works' when running these components locally (i.e. not dockerized) on my dev machine. Just doesn't work inside docker containers on an EC2 instance.
Any and all help appreciated!
Thanks.
I eventually got it to work by customising the Docker image as follows:
Clone the git repo: git clone https://github.com/disaster37/activemq.git
Fix the URL passed to curl in this file: assets/setup/install
(the current url in that file no longer works)
Edit this file: assets/config/activemq.xml and replace the <jaasAuthenticationPlugin> and <authorizationPlugin> elements with this:
<simpleAuthenticationPlugin anonymousAccessAllowed="false">
<users>
<authenticationUser username="system" password="blubblub" groups="users,admins"/>
<authenticationUser username="bot" password="blahblah" groups="users"/>
</users>
</simpleAuthenticationPlugin>
Build my own docker image: docker build --tag="zackattack/activemq" .
Run it: docker logs -f $(docker run --name='activemq' -d zackattack/activemq)
It's not pretty - but it works!
Judging by what I can glean from the debug logs: the log shows that the name 'bot' is getting across, so there does appear to be a connection between myapp and activemq; the linkage is working. The clue for me is:
Caused by: javax.security.auth.login.LoginException: No LoginModules configured for activemq-domain
I don't know what a login module is, but I would venture a guess that you need to tell activemq what to do with the name/password it receives. Looking at the description on Docker Hub for activemq, there is an example:
-e 'ACTIVEMQ_READ_LOGIN=consumer_login' -e 'ACTIVEMQ_READ_PASSWORD=consumer_password'\
and
-e 'ACTIVEMQ_STATIC_TOPICS=topic1;topic2;topic3' -e 'ACTIVEMQ_STATIC_QUEUES=queue1;queue2;queue3'
Is it possible that either of these settings is needed to configure an activemq-domain login module? Or is that done in a configuration file somewhere? From a docker perspective I think everything is in order. I think there is a configuration issue with activemq. When this works in your regular environment, how do you start activemq (what are the command line arguments, environment variables, and configuration files)?
