Confluent Platform error while starting zookeeper - Classpath is empty - windows

I downloaded Confluent Platform on my local Windows machine and tried to start ZooKeeper, but it gives me the error below:
c:\confluent>.\bin\windows\zookeeper-server-start.bat .\etc\kafka\zookeeper.properties
Classpath is empty. Please build the project first e.g. by running 'gradlew jarAll'

Confluent does not test their products on Windows, last I heard.
The recommendation is to install WSL or use the Confluent Docker containers.
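If you go the Docker route, a minimal docker-compose sketch using the official confluentinc images might look like the following (the 6.1.0 tag, port mapping, and single-broker settings are assumptions; adjust for your setup):

version: "3"
services:
  zookeeper:
    image: confluentinc/cp-zookeeper:6.1.0
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
  kafka:
    image: confluentinc/cp-kafka:6.1.0
    depends_on:
      - zookeeper
    ports:
      - "9092:9092"
    environment:
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://localhost:9092
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1

Run docker-compose up -d and point your clients at localhost:9092.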

Related

Confluent Kafka Connect Elasticsearch connector installation

I'm trying to install the Elasticsearch connector into Confluent Kafka Connect. I'm following the instructions below:
https://docs.confluent.io/kafka-connect-elasticsearch/current/index.html#install-the-connector-using-c-hub
after executing:
confluent-hub install confluentinc/kafka-connect-elasticsearch:latest
everything seems fine. See below result:
[ec2-user@ip-172-31-16-76 confluent-6.1.0]$ confluent-hub install confluentinc/kafka-connect-elasticsearch:latest
The component can be installed in any of the following Confluent Platform installations:
1. /home/ec2-user/confluent-6.1.0 (based on $CONFLUENT_HOME)
2. /home/ec2-user/confluent-6.1.0 (found in the current directory)
3. /home/ec2-user/confluent-6.1.0 (where this tool is installed)
Choose one of these to continue the installation (1-3): 2
Do you want to install this into /home/ec2-user/confluent-6.1.0/share/confluent-hub-components? (yN) y
Component's license:
Confluent Community License
http://www.confluent.io/confluent-community-license
I agree to the software license agreement (yN) y
Downloading component Kafka Connect Elasticsearch 11.0.3, provided by Confluent, Inc. from Confluent Hub and installing into /home/ec2-user/confluent-6.1.0/share/confluent-hub-components
Do you want to uninstall existing version 11.0.3? (yN) y
Detected Worker's configs:
1. Standard: /home/ec2-user/confluent-6.1.0/etc/kafka/connect-distributed.properties
2. Standard: /home/ec2-user/confluent-6.1.0/etc/kafka/connect-standalone.properties
3. Standard: /home/ec2-user/confluent-6.1.0/etc/schema-registry/connect-avro-distributed.properties
4. Standard: /home/ec2-user/confluent-6.1.0/etc/schema-registry/connect-avro-standalone.properties
5. Based on CONFLUENT_CURRENT: /tmp/confluent.424339/connect/connect.properties
6. Used by Connect process with PID : /tmp/confluent.424339/connect/connect.properties
Do you want to update all detected configs? (yN) y
Adding installation directory to plugin path in the following files:
/home/ec2-user/confluent-6.1.0/etc/kafka/connect-distributed.properties
/home/ec2-user/confluent-6.1.0/etc/kafka/connect-standalone.properties
/home/ec2-user/confluent-6.1.0/etc/schema-registry/connect-avro-distributed.properties
/home/ec2-user/confluent-6.1.0/etc/schema-registry/connect-avro-standalone.properties
/tmp/confluent.424339/connect/connect.properties
/tmp/confluent.424339/connect/connect.properties
Completed
However, when I try to list all available connectors, I get the list below:
[ec2-user@ip-172-31-16-76 confluent-6.1.0]$ confluent local services connect connector list
The local commands are intended for a single-node development environment only,
NOT for production usage. https://docs.confluent.io/current/cli/index.html
Bundled Connectors:
file-sink
file-source
replicator
As per the instructions in the link above, I would expect to see elasticsearch-sink. Unfortunately, no such entry is available.
It seems I'm missing something simple, but I don't see any explanation in the instructions. Any help would be appreciated.
EDIT 1
Below you can see the result of curl -s localhost:8083/connector-plugins:
[
{"class":"io.confluent.connect.elasticsearch.ElasticsearchSinkConnector","type":"sink","version":"11.0.3"},
{"class":"io.confluent.connect.replicator.ReplicatorSourceConnector","type":"source","version":"6.1.0"},
{"class":"io.confluent.kafka.connect.datagen.DatagenConnector","type":"source","version":"null"},
{"class":"org.apache.kafka.connect.file.FileStreamSinkConnector","type":"sink","version":"6.1.0-ce"},
{"class":"org.apache.kafka.connect.file.FileStreamSourceConnector","type":"source","version":"6.1.0-ce"},
{"class":"org.apache.kafka.connect.mirror.MirrorCheckpointConnector","type":"source","version":"1"},
{"class":"org.apache.kafka.connect.mirror.MirrorHeartbeatConnector","type":"source","version":"1"},
{"class":"org.apache.kafka.connect.mirror.MirrorSourceConnector","type":"source","version":"1"}
]
curl -s localhost:8083/connector-plugins gives the definitive answer from the worker about which plugins are installed.
Per the output in your question, the Elasticsearch sink connector is installed on your Connect worker. I don't know why the Confluent CLI would not show this.
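Since the worker has the plugin loaded, you can create an instance of it directly against the Connect REST API rather than going through the CLI. A minimal sketch (the topic name test-topic and the Elasticsearch URL http://localhost:9200 are placeholders):

curl -s -X PUT localhost:8083/connectors/elasticsearch-sink/config \
  -H "Content-Type: application/json" \
  -d '{
    "connector.class": "io.confluent.connect.elasticsearch.ElasticsearchSinkConnector",
    "topics": "test-topic",
    "connection.url": "http://localhost:9200",
    "key.ignore": "true",
    "schema.ignore": "true"
  }'

Afterwards, curl -s localhost:8083/connectors should list elasticsearch-sink regardless of what the CLI reports.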

Path for Flink state.checkpoints.dir in docker-compose in Windows 10 environment

I have Windows 10, docker-compose, and want to work through the Apache Flink tutorial playground. docker-compose starts correctly, but after several minutes of work, when Apache Flink has to create checkpoints, there is a problem with access to the file system.
Exception:
org.apache.flink.runtime.checkpoint.CheckpointException: Could not finalize the pending checkpoint 104. Failure reason: Failure to finalize checkpoint.
at org.apache.flink.runtime.checkpoint.CheckpointCoordinator.completePendingCheckpoint(CheckpointCoordinator.java:1216) ~[flink-dist_2.11-1.12.1.jar:1.12.1]
…..
Caused by: org.apache.flink.util.SerializedThrowable: Mkdirs failed to create file:/tmp/flink-checkpoints-directory/d73c2f87b0d7ea6748a1913ee4b50afe/chk-104
at org.apache.flink.core.fs.local.LocalFileSystem.create(LocalFileSystem.java:262) ~[flink-dist_2.11-1.12.1.jar:1.12.1]
Could you please help me with the correct path and docker access?
state.backend: filesystem
state.checkpoints.dir: file:///tmp/flink-checkpoints-directory
state.savepoints.dir: file:///tmp/flink-savepoints-directory
I also tried using a full Windows path, but got the same error.
Are you using Windows Docker containers or Linux Docker containers?
Right-click on the Docker Desktop icon to see your current configuration.
Switch to Windows containers
OR
Switch to Linux containers
You have to configure the Flink paths according to your target Docker container type.
Note: You cannot use Windows and Linux containers at the same time.
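For Linux containers, one way to make the checkpoint path writable is to bind-mount a host directory at the path Flink expects, in both the jobmanager and taskmanager services of your compose file. A sketch (the service names and image tag are assumptions based on the playground's compose file):

services:
  jobmanager:
    image: flink:1.12.1-scala_2.11
    volumes:
      - ./flink-checkpoints:/tmp/flink-checkpoints-directory
  taskmanager:
    image: flink:1.12.1-scala_2.11
    volumes:
      - ./flink-checkpoints:/tmp/flink-checkpoints-directory

With the mount in place, state.checkpoints.dir: file:///tmp/flink-checkpoints-directory points at a directory the containers can actually create subdirectories in, and Docker Desktop handles the Windows-to-Linux path translation for the ./flink-checkpoints host side.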

how to install confluent in AWS EC2

I want to use Confluent in an AWS EC2 environment. How can I install it? I have tried the Confluent CLI locally and want to replicate its feature of connecting SQL to Kafka. Is there any documentation on this?
You can find how to install Confluent Platform from DEB or YUM sources in the Confluent docs. Otherwise, extract the same package you would have used locally.
There are AWS Quick Start templates and Ansible setups on Confluent's GitHub for setting up a full cluster. Or you could use EKS to run it in Kubernetes, if that's something you're comfortable with. I'm sure there are some third-party Terraform repos out there as well...
For non-container, production use cases, you'd use systemctl to start services on independently running servers, not all Confluent services on just one system as with confluent start.
It sounds like you just want to run KSQL, but it's not clear if/where you have a running Kafka cluster.
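As a sketch of the DEB route (assuming Ubuntu and Confluent Platform 6.1; the systemd unit names vary slightly between the Community and Enterprise editions):

wget -qO - https://packages.confluent.io/deb/6.1/archive.key | sudo apt-key add -
sudo add-apt-repository "deb [arch=amd64] https://packages.confluent.io/deb/6.1 stable main"
sudo apt-get update && sudo apt-get install -y confluent-platform

# then start each service independently, e.g.:
sudo systemctl start confluent-zookeeper
sudo systemctl start confluent-server
sudo systemctl start confluent-ksqldb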
You just need to download the Confluent ZIP from https://www.confluent.io/download/ and unzip it in your desired folder.
To start the Confluent services, go to the Confluent bin directory (/path/to/extracted/folder/confluent/bin).
To start all Confluent services:
confluent start
To check service status:
confluent status
To stop the services:
confluent stop
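Note that confluent start/status/stop is the older CLI syntax; on recent Confluent Platform versions these commands live under confluent local. A sketch assuming the 6.1.0 tarball (check the download page for the exact archive URL):

curl -O https://packages.confluent.io/archive/6.1/confluent-6.1.0.tar.gz
tar -xzf confluent-6.1.0.tar.gz
export CONFLUENT_HOME=$PWD/confluent-6.1.0
export PATH=$PATH:$CONFLUENT_HOME/bin

confluent local services start
confluent local services status
confluent local services stop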

What docker images does DCOS Flink package require?

I have built a DCOS local universe and installed it into a cluster behind a firewall - there is no internet access to the cluster. One of the packages installed in the universe is Flink. I have installed DCOS using the cluster_docker_registry_url variable pointing at a local Docker registry which has a very small number of packages on it; it is not a mirror of the main Docker Hub.
When I try to install the Flink package into DCOS, I get 404 errors in the Mesos logs relating to missing docker images that I assume the package tries to download from the local Docker registry. The Flink cluster fails to start.
What Docker images does the Flink package try to download? I thought the build process of a local universe pulled down all dependencies, so there should be no external dependencies once it's built? What do I need to do to be able to install this on DCOS when there is no internet access?
That depends on the Scala version you are using:
Scala 2.10: mesosphere/dcos-flink:1.2.0-1.4
Scala 2.11: mesosphere/dcos-flink-2-11:1.2.0-1.4
See here.
Furthermore, it requires openjdk:8-jre (see here).
For more details feel free to refer to the universe specification for the Apache Flink service (or ping me directly):
https://github.com/mesosphere/universe/blob/version-3.x/repo/packages/F/flink/1/
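To make these images available behind the firewall, one approach is to pull, retag, and push them into your local registry from a machine that does have internet access (local-registry:5000 below is a placeholder for your registry):

docker pull mesosphere/dcos-flink:1.2.0-1.4
docker pull openjdk:8-jre
docker tag mesosphere/dcos-flink:1.2.0-1.4 local-registry:5000/mesosphere/dcos-flink:1.2.0-1.4
docker tag openjdk:8-jre local-registry:5000/openjdk:8-jre
docker push local-registry:5000/mesosphere/dcos-flink:1.2.0-1.4
docker push local-registry:5000/openjdk:8-jre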

How can I upload a jar file for jdbc connection to workload scheduler service agent in Bluemix?

The docs for the Workload Scheduler for Node.js say:
"Important: Before running a database step, download and install the
JDBC database client driver on the agent where you want to run the
step. Specify the client jar file path in the JDBC jar class path."
How can I download and install the necessary JAR files to the agent? I see from this question that they should be installed at /home/wauser/utils, but I cannot figure out how to access the agent to install.
I tried an FTP step to move the file to the agent, but it was also unsuccessful.
From your description I assume you are trying to run SQL steps on the xx_CLOUD agent.
Where is the database running, and which type of database is it? Is it on Bluemix or somewhere else?
Currently the best way to run SQL steps is to use a Workload Scheduler agent installed on a VM or in a Docker image, so that you can easily install the JDBC jar files.
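If you do have shell access to the machine where the agent runs (an assumption; the hosted xx_CLOUD agent does not give you this), copying the driver into the directory from the linked question is straightforward. A sketch with placeholder names (db2jcc4.jar and agent-host are hypothetical):

scp db2jcc4.jar wauser@agent-host:/home/wauser/utils/
# then set the step's JDBC jar class path to /home/wauser/utils/db2jcc4.jar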
