List clusters in IBM ICP v3 - ibm-cloud-private

In IBM Cloud Private version 2, the bx pr cluster command lists all the clusters in the account. What is the command to list the clusters in IBM Cloud Private version 3?

IBM Cloud Private 3.x.x and onward uses cloudctl instead of bx pr for its command-line interface (CLI). You can find the documentation in the IBM Cloud Private Knowledge Center (ICP 3.1.0 selected): https://www.ibm.com/support/knowledgecenter/SSBS6K_3.1.0/manage_cluster/icp_cli.html

Related

Spring Boot Cassandra : c.d.o.d.i.core.session.DefaultSession : [s0] Negotiated protocol version V5 instead of v4

Small question regarding Spring Boot and Cassandra please.
I am currently using Spring Boot 2.5.1 with its associated Cassandra connector, spring-boot-starter-data-cassandra, also at 2.5.1, which I think in turn uses driver 4.11.1:
c.d.o.d.i.core.DefaultMavenCoordinates : DataStax Java driver for Apache Cassandra(R) (com.datastax.oss:java-driver-core) version 4.11.1
Upon application start, I am observing a strange log which I do not understand:
c.d.o.d.i.core.session.DefaultSession : [s0] Negotiated protocol version V4 for the initial contact point, but cluster seems to support V5, keeping the negotiated version
It seems the cluster supports V5 (whatever that is) but my app is "doing negotiation" with V4.
May I ask how I can configure my application to leverage this Cassandra V5, please?
Thank you
The native protocol defines the format of the messages exchanged between the driver and the Cassandra cluster over TCP connections. Java driver 4 supports protocol versions v3 (C* 2.1), v4 (C* 2.2, 3.x) and v5 (C* 4.0).
Since Cassandra 4.0 is not released yet, native protocol v5 support is still in beta, so the Java driver automatically negotiates down to protocol v4 (recommended).
You can set ProtocolVersion in CassandraClusterFactoryBean, but it isn't something you need to worry about since the log entry you posted is informational only. Cheers!
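If you do want to pin the version explicitly once your cluster is on Cassandra 4.0, here is a minimal sketch using the driver 4.x programmatic configuration directly; the contact point and datacenter name are placeholders, and with Spring Boot you would normally expose this CqlSession as a bean (or set the equivalent option, datastax-java-driver.advanced.protocol.version, in application.conf) rather than build it by hand.

```java
import java.net.InetSocketAddress;

import com.datastax.oss.driver.api.core.CqlSession;
import com.datastax.oss.driver.api.core.config.DefaultDriverOption;
import com.datastax.oss.driver.api.core.config.DriverConfigLoader;

public class SessionFactory {

    // Builds a CqlSession that requests a fixed protocol version instead of
    // relying on negotiation. Only do this once every node in the cluster
    // actually speaks that version.
    public static CqlSession buildSession() {
        DriverConfigLoader loader = DriverConfigLoader.programmaticBuilder()
                // "V4" is the safe choice today; switch to "V5" after the
                // cluster has been upgraded to Cassandra 4.0.
                .withString(DefaultDriverOption.PROTOCOL_VERSION, "V4")
                .build();

        return CqlSession.builder()
                .withConfigLoader(loader)
                .addContactPoint(new InetSocketAddress("127.0.0.1", 9042)) // placeholder
                .withLocalDatacenter("datacenter1")                        // placeholder
                .build();
    }
}
```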

How to connect to kafka installed using confluent helm chart

I have a Kubernetes cluster hosted on Azure cloud. I installed Kafka resources using the helm chart below: https://github.com/confluentinc/cp-helm-charts/tree/master/charts/cp-kafka. The chart successfully deployed ZooKeeper pods, broker pods, etc. Now I want to write a Go-based application which connects to any of the Kafka brokers installed on my Kubernetes cluster, creates a new producer, and publishes messages. Any help would be highly appreciated.
You can use the following string in bootstrap.servers to communicate with the brokers: <helm-release-name>-cp-kafka-headless.<namespace>:9092, or the bootstrap service which is created as part of the Confluent helm chart: <helm-release-name>-cp-kafka. When you hit this service, the first request goes to an arbitrary broker and returns all the metadata information, which is synced through ZooKeeper.
Subsequent requests are made to individual brokers based on the information found in that metadata.
You would deploy your Go code in a container in Kubernetes, then set bootstrap.servers to the Kafka deployment's Service name, ideally via an environment variable.
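The question asks for Go, but the wiring is identical in any client: point bootstrap.servers at one of the service names above and the client discovers the individual brokers from the metadata. A minimal sketch in Java (release name my-kafka, namespace kafka, and topic demo-topic are placeholders; a Go client such as sarama or confluent-kafka-go takes the same bootstrap string):

```java
import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class Publisher {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // Placeholder service name: release "my-kafka" in namespace "kafka".
        // In a real deployment, inject this through an environment variable.
        props.put("bootstrap.servers",
                System.getenv().getOrDefault("KAFKA_BOOTSTRAP",
                        "my-kafka-cp-kafka-headless.kafka:9092"));
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // "demo-topic" is a placeholder topic name.
            producer.send(new ProducerRecord<>("demo-topic", "key", "hello from k8s")).get();
        }
    }
}
```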

How to implement kafka-connect using apache-kafka instead of confluent

I would like to use an open source version of Kafka Connect instead of the Confluent one, as it appears that the Confluent CLI is not for production and only for dev. I would like to be able to listen to changes on a MySQL database on AWS EC2. Can someone point me in the right direction?
Kafka Connect is part of Apache Kafka. Period. If you want to use Kafka Connect you can do so with any modern distribution of Apache Kafka.
You then need a connector plugin to use with Kafka Connect, specific to your source technology. For integrating with a database there are various considerations, and for MySQL specifically you have:
Kafka Connect JDBC - see it in action here
Debezium - see it in action here
The Confluent CLI is just a tool for helping manage and deploy Confluent Platform on developer machines. Confluent Platform itself is widely used in production.
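To make that concrete, here is a hedged sketch of registering a Debezium MySQL connector against a plain Apache Kafka Connect worker's REST API. It assumes a worker listening on localhost:8083 with the Debezium MySQL plugin already on its plugin path; every hostname, credential, and topic name below is a placeholder, and the exact Debezium property names vary slightly between versions. The same JSON could just as well be posted with curl or kept as a properties file for connect-standalone.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class RegisterConnector {
    public static void main(String[] args) throws Exception {
        // Connector definition: connector class plus MySQL connection details.
        // All values below are placeholders for illustration only.
        String body = """
            {
              "name": "mysql-source",
              "config": {
                "connector.class": "io.debezium.connector.mysql.MySqlConnector",
                "database.hostname": "ec2-xx-xx-xx-xx.compute.amazonaws.com",
                "database.port": "3306",
                "database.user": "debezium",
                "database.password": "change-me",
                "database.server.id": "184054",
                "database.server.name": "mydb",
                "database.include.list": "inventory",
                "database.history.kafka.bootstrap.servers": "kafka:9092",
                "database.history.kafka.topic": "schema-changes.inventory"
              }
            }
            """;

        // POST the connector config to the Connect worker's REST API.
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:8083/connectors")) // placeholder worker URL
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode() + " " + response.body());
    }
}
```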

How to setup Corda Node for Production?

After checking some examples and tutorials, I wonder if there is a guide on how to set up a production Corda node.
I would expect something like a Docker (Compose) setup with a message server, database, and web server (Spring?) to start the whole infrastructure, which enables deployments of new CorDapps or updates.
Is there anyone here who could share, for example, a Jenkins pipeline which could act as a blueprint?
Here are some general steps for deploying a CorDapp:
Install the Corda Node
Implement the Corda Firewall PKI
Generate Bridge and Float keystores for your Artemis server (message queue)
You can take a look at this official guide for a detailed explanation.
And if you are working with Docker, there is now a guide available here.
For CI/CD, there are a few samples already done with Jenkins in the Corda GitHub. Besides that, you can look at CircleCI as an alternative option.

Does ibm-cloud-private support syndication of the catalog?

I see a support statement for IBM Cloud Dedicated (syndicated catalog) here: https://console.bluemix.net/docs/dedicated/index.html#catalogdedicated.
Is it supported in ibm-cloud-private?
No, IBM Cloud Private does not use the syndicated catalog from IBM Cloud Public and Dedicated. ICP uses a Helm chart catalog, while IBM Cloud Public and Dedicated use the syndicated catalog, which serves up service-broker access to services as well as container-based services. ICP doesn't have to syndicate across public datacenters like IBM Cloud Public and Dedicated because ICP runs in the datacenter of your choice and might be air-gapped (no connection to the internet). The syndicated catalog is built on the assumption of public datacenters.
