Connecting to MQ using CCDT JSON from WebSphere

I have IBM MQ running in one Docker container and IBM WebSphere running in another. From WebSphere I am trying to create a QCF (queue connection factory) using the CCDT connection method. I have copied the CCDT file into the /tmp folder of the WebSphere container, but when I test the connection I get the error:
A connection could not be made to IBM MQ for the following reason: JMSCMQ0001: IBM MQ call failed with compcode '2' ('MQCC_FAILED') reason '2278' ('MQRC_CLIENT_CONN_ERROR').
I am able to connect from MQ Explorer using the same CCDT file.
CCDT JSON sample used:
{
  "channel": [
    {
      "connectionManagement": {
        "sharingConversations": 10,
        "defaultReconnect": "no",
        "heartbeatInterval": 10,
        "keepAliveInterval": -1
      },
      "general": {
        "description": "Client Channel Definition",
        "maximumMessageLength": 104857600
      },
      "name": "CHANNEL1",
      "clientConnection": {
        "connection": [
          {
            "host": "IP",
            "port": port
          }
        ],
        "queueManager": "QMNAME"
      },
      "type": "clientConnection"
    }
  ]
}

JSON CCDT support was not added to IBM MQ until 9.2 LTS, so you won't be able to use it with the 9.1.0.7 resource adapter (RA).
Your only options are to use a binary CCDT or to install the 9.2 RA (RAR file) for WAS to use instead of the built-in 9.1.0.7 RA.
9.2.0.4 is the latest release and you can download the java-all package to obtain the RAR file.
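To illustrate what a 9.2-level client accepts, here is a minimal standalone sketch that points the MQ classes for JMS at a JSON CCDT. It assumes the 9.2+ client jars are on the classpath; the file path and queue manager name are placeholders, not your actual values:

import java.net.URL;
import javax.jms.Connection;
import com.ibm.mq.jms.MQQueueConnectionFactory;
import com.ibm.msg.client.wmq.WMQConstants;

public class CcdtJsonCheck {
    public static void main(String[] args) throws Exception {
        MQQueueConnectionFactory cf = new MQQueueConnectionFactory();
        cf.setTransportType(WMQConstants.WMQ_CM_CLIENT);  // client transport is required when using a CCDT
        cf.setCCDTURL(new URL("file:///tmp/ccdt.json"));  // placeholder path to the JSON CCDT
        cf.setQueueManager("QMNAME");                     // placeholder queue manager name
        Connection conn = cf.createConnection();          // pass credentials here if the channel requires them
        System.out.println("Connected via JSON CCDT");
        conn.close();
    }
}

If this works outside WAS but the QCF test still fails inside WAS, that points to the application server still loading the older built-in RA.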

Related

Problem in connecting Apache Superset running inside a Docker container to Kylin

I have Apache Superset running inside a Docker container and I want to connect it to a running Apache Kylin (not inside Docker).
I receive the following error whenever I test the connection with this SQLAlchemy URI: 'kylin://ADMIN#KYLIN#local:7070/test':
[SupersetError(message='(builtins.NoneType) None\n(Background on this error at: http://sqlalche.me/e/13/dbapi)', error_type=<SupersetErrorType.GENERIC_DB_ENGINE_ERROR: 'GENERIC_DB_ENGINE_ERROR'>, level=<ErrorLevel.ERROR: 'error'>, extra={'engine_name': 'Apache Kylin', 'issue_codes': [{'code': 1002, 'message': 'Issue 1002 - The database returned an unexpected error.'}]})]
"POST /api/v1/database/test_connection HTTP/1.1" 422 -
superset_app | 2021-07-02 18:44:17,224:INFO:werkzeug:172.28.0.1 - - [02/Jul/2021 18:44:17] "POST /api/v1/database/test_connection HTTP/1.1" 422 -
You might need to check your superset_app network first.
Use docker inspect [container name], e.g.
docker inspect superset_app
In my case, it is running in the superset_default network:
"Networks": {
"superset_default": {
.....
}
}
Next, you need to connect your kylin Docker container to this network, e.g.
docker network connect superset_default kylin
where kylin is your container name.
Now your superset_app and kylin containers are exposed within the same network. You can docker inspect your kylin container
docker inspect kylin
and find the IPAddress
"Networks": {
"bridge": {
....
},
"superset_default": {
...
"IPAddress": "172.18.0.5",
...
}
}
In Superset you can now connect to your kylin Docker container using that IP address.
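For example, the SQLAlchemy URI in Superset would then point at the container's IP on the shared network rather than at localhost, along the lines of kylin://ADMIN:KYLIN@172.18.0.5:7070/test (the credentials, IP address and project name here are placeholders; substitute the values from your own setup).
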
We have hosted Dremio and Superset on an AKS Cluster in Azure and we are trying to connect Superset to the Dremio Database(Lakehouse) for fetching some dashboards. We have installed all the required drivers(arrowflight, sqlalchemy_dremio and unixodc/dev) to establish the connection.
Strangely, we are not able to connect to Dremio from the Superset UI using the connection strings:
dremio+flight://admin:password#dremiohostname.westeurope.cloudapp.azure.com:32010/dremio
dremio://admin:adminpass#dremiohostname.westeurope.cloudapp.azure.com:31010/databaseschema.dataset/dremio?SSL=0
Here’s the error:
(builtins.NoneType) None\n(Background on this error at: https://sqlalche.me/e/14/dbapi)", "error_type": "GENERIC_DB_ENGINE_ERROR", "level": "error", "extra": {"engine_name": "Dremio", "issue_codes": [{"code": 1002, "message": "Issue 1002 - The database returned an unexpected error."}]}}]
However, when trying from inside the Superset pod using this Python script [here][1], the connection goes through without any issues.
PS: One point to note is that we have not enabled SSL certificates for our hostnames.

Configure XA datasource in Red Hat JBoss EAP 7.0 for MariaDB

I want to configure an XA datasource for MariaDB in Red Hat JBoss EAP 7.0.
I have created a non-XA datasource with the details below and the connection is working fine.
Driver: mysql-connector-java-5.1.46.jar_com.mysql.jdbc.Driver_5_1
Connection URL: jdbc:mysql://localhost:3306/test
But when I tried to create a new XA datasource for distributed transactions, it failed with the error detailed below.
Unexpected HTTP response: 500
Request
{
"address" => [
("subsystem" => "datasources"),
("xa-data-source" => "MysqlXADS1")
],
"operation" => "test-connection-in-pool"
}
Response
Internal Server Error
{
    "outcome" => "failed",
    "failure-description" => "WFLYJCA0040: failed to invoke operation: WFLYJCA0042: failed to match pool. Check JndiName: java:/MysqlXADS1",
    "rolled-back" => true
}
Configuration Details :
Driver: mysql-connector-java-5.1.46.jar_com.mysql.jdbc.jdbc2.optional.MysqlXADataSource_5_1
Url: jdbc:mariadb://localhost:3306/test
Valid Connection Checker: org.jboss.jca.adapters.jdbc.extensions.mysql.MySQLValidConnectionChecker
Exception Sorter: org.jboss.jca.adapters.jdbc.extensions.mysql.MySQLExceptionSorter
JBoss EAP support for MariaDB started in version 7.0, so your version is not a problem.
From the error WFLYJCA0040: failed to invoke operation: WFLYJCA0042: failed to match pool. Check JndiName, and assuming the JNDI name is correctly assigned, I believe your issue will be solved by steps 1.3 and 2, as below:
When setting up a data source on JBoss EAP 7.0:
deploy the driver
create the datasource
verify the pool size, removing the line <max-pool-size>0</max-pool-size> as explained here
reload
For your version, JBoss EAP 7.0, remember to enable and test the datasource:
#/subsystem=datasources/data-source=MyExampleDS:enable()
#/subsystem=datasources/data-source=MyExampleDS:test-connection-in-pool()
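For reference, a complete XA datasource definition for MariaDB in standalone.xml might look roughly like the sketch below. This assumes you switch to the MariaDB JDBC driver installed as a module named org.mariadb (rather than the MySQL Connector/J deployment used above); the JNDI name, credentials and connection URL are placeholders:

<xa-datasource jndi-name="java:/MariaDBXADS" pool-name="MariaDBXADS" enabled="true">
    <!-- the whole connection URL can be supplied as a single XA datasource property -->
    <xa-datasource-property name="Url">jdbc:mariadb://localhost:3306/test</xa-datasource-property>
    <driver>mariadb</driver>
    <security>
        <user-name>dbuser</user-name>
        <password>dbpass</password>
    </security>
</xa-datasource>
<drivers>
    <driver name="mariadb" module="org.mariadb">
        <!-- XA-capable datasource class shipped with the MariaDB driver; verify the class name for your driver version -->
        <xa-datasource-class>org.mariadb.jdbc.MariaDbDataSource</xa-datasource-class>
    </driver>
</drivers>

The important detail is that the xa-datasource-class must come from a driver that actually implements XADataSource, and the pool must not be capped at zero, which is exactly the max-pool-size check in the list above.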

kafka.common.KafkaException: Failed to parse the broker info from zookeeper (EC2 to Elasticsearch)

I have AWS MSK set up and I am trying to sink records from MSK to Elasticsearch.
I am able to push data into MSK in JSON format.
I want to sink it to Elasticsearch.
I believe I have done all the setup correctly.
This is what I have done on the EC2 instance:
wget /usr/local http://packages.confluent.io/archive/3.1/confluent-oss-3.1.2-2.11.tar.gz -P ~/Downloads/
tar -zxvf ~/Downloads/confluent-oss-3.1.2-2.11.tar.gz -C ~/Downloads/
sudo mv ~/Downloads/confluent-3.1.2 /usr/local/confluent
/usr/local/confluent/etc/kafka-connect-elasticsearch
After that I modified the kafka-connect-elasticsearch properties and set my Elasticsearch URL:
name=elasticsearch-sink
connector.class=io.confluent.connect.elasticsearch.ElasticsearchSinkConnector
tasks.max=1
topics=AWSKafkaTutorialTopic
key.ignore=true
connection.url=https://search-abcdefg-risdfgdfgk-es-ex675zav7k6mmmqodfgdxxipg5cfsi.us-east-1.es.amazonaws.com
type.name=kafka-connect
The producer sends messages in the format below:
{
  "data": {
    "RequestID": 517082653,
    "ContentTypeID": 9,
    "OrgID": 16145,
    "UserID": 4,
    "PromotionStartDateTime": "2019-12-14T16:06:21Z",
    "PromotionEndDateTime": "2019-12-14T16:16:04Z",
    "SystemStartDatetime": "2019-12-14T16:17:45.507000000Z"
  },
  "metadata": {
    "timestamp": "2019-12-29T10:37:31.502042Z",
    "record-type": "data",
    "operation": "insert",
    "partition-key-type": "schema-table",
    "schema-name": "dbo",
    "table-name": "TRFSDIQueue"
  }
}
I am a little confused about how Kafka Connect will start here. Do I need to start it myself, and if so, how?
I have also started the Schema Registry as below, which gave me an error.
/usr/local/confluent/bin/schema-registry-start /usr/local/confluent/etc/schema-registry/schema-registry.properties
When I do that I get the error below:
[2019-12-29 13:49:17,861] ERROR Server died unexpectedly: (io.confluent.kafka.schemaregistry.rest.SchemaRegistryMain:51)
kafka.common.KafkaException: Failed to parse the broker info from zookeeper: {"listener_security_protocol_map":{"CLIENT":"PLAINTEXT","CLIENT_SECURE":"SSL","REPLICATION":"PLAINTEXT","REPLICATION_SECURE":"SSL"},"endpoints":["CLIENT:/
Please help.
As suggested in the answer, I upgraded the Kafka Connect version, but then I started getting the error below:
ERROR Error starting the schema registry (io.confluent.kafka.schemaregistry.rest.SchemaRegistryRestApplication:63)
io.confluent.kafka.schemaregistry.exceptions.SchemaRegistryInitializationException: Error initializing kafka store while initializing schema registry
at io.confluent.kafka.schemaregistry.storage.KafkaSchemaRegistry.init(KafkaSchemaRegistry.java:210)
at io.confluent.kafka.schemaregistry.rest.SchemaRegistryRestApplication.initSchemaRegistry(SchemaRegistryRestApplication.java:61)
at io.confluent.kafka.schemaregistry.rest.SchemaRegistryRestApplication.setupResources(SchemaRegistryRestApplication.java:72)
at io.confluent.kafka.schemaregistry.rest.SchemaRegistryRestApplication.setupResources(SchemaRegistryRestApplication.java:39)
at io.confluent.rest.Application.createServer(Application.java:201)
at io.confluent.kafka.schemaregistry.rest.SchemaRegistryMain.main(SchemaRegistryMain.java:41)
Caused by: io.confluent.kafka.schemaregistry.storage.exceptions.StoreInitializationException: Timed out trying to create or validate schema topic configuration
at io.confluent.kafka.schemaregistry.storage.KafkaStore.createOrVerifySchemaTopic(KafkaStore.java:168)
at io.confluent.kafka.schemaregistry.storage.KafkaStore.init(KafkaStore.java:111)
at io.confluent.kafka.schemaregistry.storage.KafkaSchemaRegistry.init(KafkaSchemaRegistry.java:208)
... 5 more
Caused by: java.util.concurrent.TimeoutException
at org.apache.kafka.common.internals.KafkaFutureImpl$SingleWaiter.await(KafkaFutureImpl.java:108)
at org.apache.kafka.common.internals.KafkaFutureImpl.get(KafkaFutureImpl.java:274)
at io.confluent.kafka.schemaregistry.storage.KafkaStore.createOrVerifySchemaTopic(KafkaStore.java:161)
... 7 more
First, Confluent Platform 3.1.2 is fairly old. I suggest you get the version that aligns with your Kafka version.
You start Kafka Connect using the appropriate connect-* scripts and properties located under the bin and etc/kafka folders.
For example,
/usr/local/confluent/bin/connect-standalone \
/usr/local/confluent/etc/kafka/kafka-connect-standalone.properties \
/usr/local/confluent/etc/kafka-connect-elasticsearch/quickstart.properties
If that works, you can move on to using the connect-distributed command instead.
Regarding the Schema Registry, you can search its GitHub issues for multiple people trying to get MSK to work, but the root issue is that MSK does not expose a PLAINTEXT listener and the Schema Registry does not support named listeners. (This may have changed since the 5.x versions.)
You could also try running the Connect and Schema Registry containers in ECS / EKS rather than extracting a tarball onto an EC2 machine.
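For reference, a minimal sketch of the standalone worker properties when pointing at MSK; the broker endpoints are placeholders, and if your MSK listener is TLS-only you would additionally need security.protocol=SSL:

# placeholder MSK bootstrap brokers
bootstrap.servers=b-1.mycluster.kafka.us-east-1.amazonaws.com:9092,b-2.mycluster.kafka.us-east-1.amazonaws.com:9092
# JSON messages without embedded schemas
key.converter=org.apache.kafka.connect.json.JsonConverter
value.converter=org.apache.kafka.connect.json.JsonConverter
key.converter.schemas.enable=false
value.converter.schemas.enable=false
# standalone mode keeps offsets in a local file
offset.storage.file.filename=/tmp/connect.offsets
# newer Confluent releases locate connectors via plugin.path
plugin.path=/usr/local/confluent/share/java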

Google Cloud Storage Repository Plugin

I have a K8s cluster on GCP running Elasticsearch and I need to create a backup.
I've installed the GCS plugin on my pods in a stateful set and tried setting it up with the following documentation:
https://github.com/elastic/elasticsearch/blob/master/docs/plugins/repository-gcs.asciidoc
When I try to configure a repository to use credentials stored in the keystore I get the following response back:
{
  "error": {
    "root_cause": [
      {
        "type": "repository_exception",
        "reason": "[my_backup] repository type [gcs] does not exist"
      }
    ],
    "type": "repository_exception",
    "reason": "[my_backup] repository type [gcs] does not exist"
  },
  "status": 500
}
Any lead would be helpful, thanks!
I think the problem is that I can't install the plugin on the nodes, so I've installed it in the pods instead, and that installation is not persistent after the pods restart. To make the installation persist on K8s I would need to build a custom image that installs the plugin. That is a bit tricky, and the plugin seems to be intended for GCE anyway, so I decided to move from K8s to a managed instance group on GCE instead.
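For anyone who wants to stay on Kubernetes, the custom-image route is roughly a short Dockerfile like this sketch (the Elasticsearch version tag is a placeholder and the plugin installed must match that version):

# build an Elasticsearch image with the GCS repository plugin baked in
FROM docker.elastic.co/elasticsearch/elasticsearch:7.10.2
RUN bin/elasticsearch-plugin install --batch repository-gcs

The resulting image then has to be referenced from the stateful set so that the plugin survives pod restarts.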

Kafka HDFS connector connecting to private IP instead of hostname in multi-DC setup

I have 2 clusters:
one in-house with Confluent (3.0.0-1)
one in AWS, with Hadoop (HDP 2.4)
I am trying to use the HDFS connector to write from Confluent to Hadoop.
Long story short: the connector tries to connect to a private IP of the Hadoop cluster instead of using the hostname. On the in-house cluster, /etc/hosts has been updated to resolve the internal Hadoop hostnames to the relevant public IPs.
I am using the distributed connector and I have a bunch of connector JSON files as follows:
{
  "name": "sent-connector",
  "connector.class": "io.confluent.connect.hdfs.HdfsSinkConnector",
  "tasks.max": "1",
  "topics": "sent",
  "topics.dir": "/kafka-connect/topics",
  "logs.dir": "/kafka-connect/wal",
  "hdfs.url": "hdfs://ambari:8020",
  "hadoop.conf.dir": "/etc/hadoop/conf",
  "hadoop.home": "/usr/hdp/current/hadoop-client",
  "flush.size": "100",
  "hive.integration": "true",
  "hive.metastore.uris": "thrift://ambari:9083",
  "hive.database": "events",
  "hive.home": "/usr/hdp/current/hive-client",
  "hive.conf.dir": "/etc/hive/conf",
  "schema.compatibility": "FULL",
  "partitioner.class": "io.confluent.connect.hdfs.partitioner.HourlyPartitioner",
  "path.format": "'year'=YYYY/'month'=MM/'day'=dd/'hour'=HH/",
  "locale": "C",
  "timezone": "UTC",
  "rotate.interval.ms": "2000"
}
and the worker is defined as such:
rest.port=8083
bootstrap.servers=<eth0 IP of the server>:9092
group.id=dp2hdfs
key.converter=io.confluent.connect.avro.AvroConverter
key.converter.schema.registry.url=schemareg.dpe.webpower.io
value.converter=io.confluent.connect.avro.AvroConverter
value.converter.schema.registry.url=schemareg.dpe.webpower.io
internal.key.converter=org.apache.kafka.connect.json.JsonConverter
internal.value.converter=org.apache.kafka.connect.json.JsonConverter
internal.key.converter.schemas.enable=false
internal.value.converter.schemas.enable=false
config.storage.topic=k2hdfs-configs
offset.storage.topic=k2hdfs-offsets
status.storage.topic=k2hdfs-statuses
debug=true
A few notes:
/kafka-connect exists on HDFS and is world-writable
The 3 topics (*.storage.topic) do exist
I have one worker running on each of the 3 servers with a Kafka broker (there is also a schema registry, REST API and ZooKeeper server on all brokers)
I have set dfs.client.use.datanode.hostname to true, and this property is set up on the client in $HADOOP_HOME/hdfs-site.xml, as shown below
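For completeness, the client-side stanza in hdfs-site.xml for that property is just the following (a minimal sketch of the setting described in the last note above):

<property>
  <!-- make the HDFS client connect to datanodes by hostname instead of the IP returned by the namenode -->
  <name>dfs.client.use.datanode.hostname</name>
  <value>true</value>
</property>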
I see that the subdirectories of /kafka-connect are created as well as hive metadata. When I start the connector, the message is:
INFO Exception in createBlockOutputStream (org.apache.hadoop.hdfs.DFSClient:1471)
org.apache.hadoop.net.ConnectTimeoutException: 60000 millis timeout while waiting for channel to be ready for connect. ch : java.nio.channels.SocketChannel[connection-pending remot
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:533)
at org.apache.hadoop.hdfs.DFSOutputStream.createSocketForPipeline(DFSOutputStream.java:1610)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.createBlockOutputStream(DFSOutputStream.java:1408)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1361)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:588)
INFO Abandoning BP-429601535-10.0.0.167-1471011443948:blk_1073742319_1495 (org.apache.hadoop.hdfs.DFSClient:1364)
INFO Excluding datanode 10.0.0.231:50010 (org.apache.hadoop.hdfs.DFSClient:1368)
[rinse and repeat with other datanodes]
Any idea how to fix this? It looks like Confluent receives the IP directly, not a hostname.
