Issue connecting to Kafka from outside - Hadoop

I am using the Hortonworks Sandbox as my Kafka server and am trying to connect to Kafka from Eclipse with Java code.
I use this configuration for the producer that sends the message:
metadata.broker.list=sandbox.hortonworks.com:45000
serializer.class=kafka.serializer.DefaultEncoder
zk.connect=sandbox.hortonworks.com:2181
request.required.acks=0
producer.type=sync
where sandbox.hortonworks.com is the name of the sandbox I connect to.
In the Kafka server.properties I changed this configuration:
host.name=sandbox.hortonworks.com
advertised.host.name=System IP (on which my Eclipse is running)
advertised.port=45000
I also set up the port forwarding.
I am able to connect to the Kafka server from Eclipse, but while sending the message I get the exception
"Failed to send messages after 3 tries."

First make sure you have configured a host-only network for your Hortonworks Sandbox VM as described here:
http://hortonworks.com/community/forums/topic/use-host-only-networking-for-the-virtual-machine/
After doing this your sandbox VM should get an IP (e.g. 192.168.56.101) and it should be reachable from your host via SSH:
$ ssh root@192.168.56.101
Then open Ambari at http://192.168.56.101:8080/ and change the Kafka configuration to
listeners=PLAINTEXT://0.0.0.0:6667
advertised.listeners=PLAINTEXT://192.168.56.101:6667
The latter property must be added in the section "Custom kafka-broker" (See also http://hortonworks.com/community/forums/topic/ambari-alerts-how-to-change-kafka-port/).
Then start/restart Kafka via Ambari. You should now be able to access Kafka from outside the Hortonworks Sandbox VM. You can test this (from outside of the sandbox VM) using e.g. the Kafka console producer from the Kafka distribution:
$ bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic test
$ bin/kafka-console-producer.sh --topic test --broker-list 192.168.56.101:6667
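To verify end to end, a console consumer can be pointed at the same broker from outside the VM (assuming a Kafka version whose console consumer accepts --bootstrap-server):
$ bin/kafka-console-consumer.sh --bootstrap-server 192.168.56.101:6667 --topic test --from-beginning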

After almost one week of config tweaking, I finally got it to work. Similar to asmaier's answer, but if you are using a cloud server like me (Azure Sandbox-HDP), try to pub/sub through a remote consumer/producer.
In Azure:
First SSH into your Azure VM.
In the Ambari web UI (localhost:8080), add
listeners=PLAINTEXT://0.0.0.0:6667
advertised.listeners=PLAINTEXT://127.0.0.1:6667
In the terminal as root, set up Docker port forwarding as described on the Hortonworks sandbox instructions page:
vi start_scripts/start_sandbox.sh
and add port 6667 to the list.
On your PC:
First SSH into your Azure VM with tunneling for port 6667 (see the sketch below).
Then run in cmd, or use your own Java/C# client:
kafka\bin\windows>kafka-console-producer --broker-list localhost:9092 --topic test
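A minimal sketch of such a tunnel, assuming the broker is exposed on 6667 inside the sandbox (the host name is a placeholder):
$ ssh -L 6667:localhost:6667 root@your-azure-host
With advertised.listeners set to 127.0.0.1:6667, clients are redirected to the tunneled port after the initial metadata request.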
The only thing that bothers me right now is that I can't find a way to make Kafka push messages directly to a real public IP on Azure. It seems data traffic through the broker can only operate inside the Docker network internally.

Unable to connect to Mosquitto broker running on a Windows EC2 Instance from outside the EC2 Instance

I have a virtual machine that is supposed to be the host, which can receive and send data. The first picture is the error that I'm getting on my main machine (from which I'm trying to send data). The second picture is the Mosquitto log on my virtual machine. I'm also using the default config, which as far as I know can't cause these problems, at least from what I have seen in other examples. I have very little understanding of how all of this works, so any help is appreciated.
What I have tried on the host machine:
Disabling Windows defender
Adding firewall rules for "mosquitto.exe"
Installing Mosquitto on a Linux machine
Starting with the release of Mosquitto version 2.0.0 (you are running v2.0.2), the default config will only bind to localhost, as a move to a more secure default posture.
If you want to be able to access the broker from other machines you will need to explicitly edit the config files to either add a new listener that binds to the external IP address (or 0.0.0.0) or add a bind entry for the default listener.
By default it will also only allow anonymous connections (without username/password) from localhost; to allow anonymous connections from remote machines, add:
allow_anonymous true
More details can be found in the 2.0 release notes.
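A minimal mosquitto.conf combining both changes might look like this (binding to all interfaces here; a specific interface address also works):
listener 1883 0.0.0.0
allow_anonymous true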
You have to run it with
mosquitto -c mosquitto.conf
The mosquitto.conf file, which lives in the same folder as the executable (C:\Program Files\mosquitto etc.), has to include the following line:
listener 1883 ip_address_of_the_machine (192.168.1.1 etc.)
By default, the Mosquitto broker will only accept connections from clients on the local machine (the server hosting the broker).
Therefore, a custom configuration needs to be used with your instance of Mosquitto in order to accept connections from remote clients.
On your Windows machine, run a text editor as administrator and paste the following text:
listener 1883
allow_anonymous true
This creates a listener on port 1883 and allows anonymous connections. By default the number of connections is infinite. Save the file to "C:\Program Files\Mosquitto" using a file name with the ".conf" extension such as "your_conf_file.conf".
Open a terminal window and navigate to the mosquitto directory. Run the following command:
mosquitto -v -c your_conf_file.conf
where
-c : specify the broker config file.
-v : verbose mode - enable all logging types. This overrides any logging options given in the config file.
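Once the broker is running with this config, a quick smoke test from a remote machine with the standard Mosquitto clients might look like this (broker IP and topic are placeholders):
$ mosquitto_sub -h 192.168.1.100 -t test
$ mosquitto_pub -h 192.168.1.100 -t test -m "hello"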
I found I had to add not only bind_address ip_address but also allow_anonymous true before devices could connect successfully to MQTT. Of course I understand that a better option would be to set a username and password on each device, but that's the next step after everything actually works in the minimum configuration.
For those who use mosquitto with homebrew on Mac.
Adding these two lines to /opt/homebrew/Cellar/mosquitto/2.0.15/etc/mosquitto/mosquitto.conf fixed my issue.
allow_anonymous true
listener 1883
You can run it with the included 'no-auth' config file like so:
mosquitto -c /mosquitto-no-auth.conf
I had the same problem while running it inside a Docker container (generated with docker-compose).
In the docker-compose.yml file this is done with:
command: mosquitto -c /mosquitto-no-auth.conf
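For context, a minimal docker-compose.yml around that command might look like this (the image tag and port mapping are assumptions; /mosquitto-no-auth.conf ships inside the 2.x image):
services:
  mosquitto:
    image: eclipse-mosquitto:2
    command: mosquitto -c /mosquitto-no-auth.conf
    ports:
      - "1883:1883"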

Configuring Debezium MySQL connector via env vars

The only way to configure a Debezium connector (MySQL in my case) is to send a config to a running Kafka Connect instance via HTTP.
My question is: is it possible to supply this configuration when starting the Connect instance, via a properties file or (ideally) via env vars?
If you execute a connector worker in standalone mode, you can supply configuration via the command line (see the Kafka Connect documentation for details):
bin/connect-standalone worker.properties connector1.properties [connector2.properties connector3.properties ...]
For distributed mode, you can only use the REST API. But you can do some automation using tools like Ansible.
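For reference, a sketch of what one of those connector properties files could look like for Debezium MySQL (all values are placeholders; the keys are standard Debezium MySQL connector options):
name=inventory-connector
connector.class=io.debezium.connector.mysql.MySqlConnector
database.hostname=mysql.example.com
database.port=3306
database.user=debezium
database.password=secret
database.server.id=184054
database.server.name=dbserver1
database.history.kafka.bootstrap.servers=localhost:9092
database.history.kafka.topic=schema-changes.inventory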

Cannot produce events to Confluent Kafka deployed on AWS EC2 from local machine

I'm trying to connect from an external client (my laptop) to a broker in a Kafka cluster that I have running on EC2 machines. When I try to connect from my local machine I get the following error:
$ ./kafka-console-producer --broker-list AWS.PRIV.ATE.IP:9092 --topic test
>hi
>[2018-09-20 13:28:53,952] ERROR Error when sending message to topic test with key: null, value: 2 bytes with error: (org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)
org.apache.kafka.common.errors.TimeoutException: Expiring 1 record(s) for test-0: 1519 ms has passed since batch creation plus linger time
The topic exists, because if I run (from my local machine):
$ ./kafka-topics --list --zookeeper AWS.PRIV.ATE.IP:2181
__confluent.support.metrics
__consumer_offsets
_schemas
connect-configs
connect-offsets
connect-status
test
The cluster configuration is from Confluent's AWS quickstart template: https://github.com/aws-quickstart/quickstart-confluent-kafka/blob/master/templates/confluent-kafka.template and I'm running the open source version.
The three broker EC2 instances are visible to my local machine, which I verified by stopping the Kafka broker, starting a simple HTTP server on port 9092, and successfully curling that server using the internal IP address of the EC2 instance.
If I SSH into one of the broker instances I can successfully produce and consume messages across the cluster. The only update I've made to the out-of-the-box configuration provided by the template is changing listeners=PLAINTEXT://ec2-AWS-PUB-LIC-IP.compute-1.amazonaws.com:9092 in server.properties on each machine and then restarting the Kafka server.
I can provide more configuration or debugging info if necessary. I believe the issue is something regarding IP address discoverability/visibility, but I'm not entirely sure what.
You need to set advertised.listeners too.
See https://rmoff.net/2018/08/02/kafka-listeners-explained/ for details.
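A sketch of the relevant server.properties lines under that advice, reusing the placeholders from the question (bind to all interfaces, advertise the externally resolvable name):
listeners=PLAINTEXT://0.0.0.0:9092
advertised.listeners=PLAINTEXT://ec2-AWS-PUB-LIC-IP.compute-1.amazonaws.com:9092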

How to change the "Kafka Connect" component port?

On port 8083 I am running InfluxDB, for which I even get the GUI at http://localhost:8083
Now coming to Kafka, I am following the setup at https://kafka.apache.org/quickstart
I start ZooKeeper, which is in the folder /opt/zookeeper-3.4.10, with the command: bin/zkServer.sh start
With ZooKeeper started, I now start Kafka under the /opt/kafka_2.11-1.1.0 folder as:
bin/kafka-server-start.sh config/server.properties
Then I create a topic named "test" with a single partition and only one replica:
bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic test
The topic is created and can be checked with the command:
bin/kafka-topics.sh --list --zookeeper localhost:2181
Up to this point everything is fine.
Now I need to use the Kafka Connect component to import/export data.
So I create some seed data with: echo -e "foo\nbar" > test.txt
Now I use the connector configuration to make Kafka Connect work:
bin/connect-standalone.sh config/connect-standalone.properties config/connect-file-source.properties config/connect-file-sink.properties
After running the above command I get: Address already in use
Kafka Connect has stopped
I even changed rest.port=8084 in /opt/kafka_2.11-1.1.0/config/connect-distributed.properties so that it doesn't conflict with InfluxDB, which is already on 8083. Still I get the same Address already in use and Kafka Connect has stopped, as shown in the screenshots.
Since you're using Kafka Connect in standalone mode, you need to change the REST port in config/connect-standalone.properties (note that you edited connect-distributed.properties, which standalone mode does not read):
rest.port=18083
To understand more about standalone vs. distributed mode you can read the Kafka Connect documentation.
Kafka Connect standalone mode uses port 8083 as the REST API port by default. Due to this, if someone else is already using that port, the process will throw a BindException.
To change the port, navigate to the config/connect-standalone.properties file in the Kafka root directory.
Add the following key-value property to change the port used for the REST API (Kafka should have included this in the properties file by default; otherwise many developers go nuts trying to find the port mapping used in standalone mode). Put a different port as you wish.
rest.port=11133
Kafka 3.0.0
Since Kafka Connect is intended to be run as a service, it also provides a REST API for managing connectors. The REST API server can be configured using the listeners configuration option. This field should contain a list of listeners in the following format: protocol://host:port,protocol2://host2:port2. Currently supported protocols are http and https.
For example: listeners=http://localhost:8080,https://localhost:8443
By default, if no listeners are specified, the REST server runs on port 8083 using the HTTP protocol.
More details: https://kafka.apache.org/documentation/#connect_rest
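To confirm which port the Connect REST server actually came up on, you can query its connectors endpoint (shown here with the default 8083):
$ curl http://localhost:8083/connectors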
Change the port definition in config/server.properties (note that this is the Kafka broker's socket server port, not the Connect REST port):
# The port the socket server listens on
port=9092

Deploy REST API for IBM Hyperledger Composer Blockchain

I'm developing a POC on the IBM Hyperledger Blockchain. I have a business network developed and deployed in IBM Cloud. I can generate a working local REST API, but cannot make it work in the cloud, on the deployed IP.
I'm following this guide:
https://ibm-blockchain.github.io/interacting/
You just have to execute the following command:
./create/create_composer-rest-server.sh --business-network-card MY_BIZNET_CARD_NAME
But it doesn't deploy anything, and I get the following output (more related to Kubernetes than blockchain):
Preparing yaml file for create composer-rest-server
Creating composer-rest-server pod
Running: kubectl create -f /Users/sm/jsblock/ibm-container-service/cs-offerings/scripts/../kube-configs/composer-rest-server.yaml
The connection to the server localhost:8080 was refused - did you specify the right host or port?
the server doesn't have a resource type "svc"
Creating composer-rest-server service
Running: kubectl create -f /Users/sm/jsblock/ibm-container-service/cs-offerings/scripts/../kube-configs/composer-rest-server-services-free.yaml
The connection to the server localhost:8080 was refused - did you specify the right host or port?
Composer rest server created successfully
Any ideas? Thanks very much.
You need to ensure you have a correct kube config set up. Step 10 in https://ibm-blockchain.github.io/setup/ provides the details for setting up KUBECONFIG, as the error suggests that it is either not configured or not configured correctly.
The document you refer to, https://ibm-blockchain.github.io/interacting/, is being updated and should be available soon.
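As a quick check that kubectl is actually pointed at the cluster per the KUBECONFIG step above (the config path is a placeholder):
$ export KUBECONFIG=/path/to/kube-config.yml
$ kubectl get pods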
When you run the command ./create/create_composer-rest-server.sh --business-network-card MY_BIZNET_CARD_NAME, the card should be the Network Admin card for the network you deployed, NOT the PeerAdmin card, so it will be something like ./create/create_composer-rest-server.sh --business-network-card admin@perishable-network
Looks like it's an issue of access control. You should make sure again that you are running with the Local Admin configuration; it will help you to run queries.
