NiFi - connect to another instance (S2S) - apache-nifi

I'm trying to use the SiteToSiteProvenance Reporting Task.
The objective is to send provenance data between two dockerized instances of NiFi, one at port 8080 and another at port 9090.
I've created an input port, creatively called "IN", on the destination NiFi and pointed the reporting task on the source NiFi at it.
However I'm getting the following error:
Unable to refresh Remote Group's peers due to Unable to communicate with remote NiFi cluster in order to determine which nodes exist in the remote cluster
I've also exposed the port 10000 in the destination docker.
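For illustration, a configuration along these lines is typically involved; the hostname and the property values shown here are assumptions, not the asker's actual settings:
# SiteToSiteProvenanceReportingTask on the source NiFi (hypothetical values)
Destination URL:    http://destination-nifi:9090/nifi
Input Port Name:    IN
Transport Protocol: RAW
# nifi.properties on the destination NiFi
nifi.remote.input.host=destination-nifi
nifi.remote.input.socket.port=10000
nifi.remote.input.secure=false
For this to work in Docker, the destination hostname must be resolvable and reachable from the source container, which is where the networking issue described below comes in.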

As mentioned in the comments, it appears there was a networking issue between the containers.
It was finally resolved by the asker by not using containers.

Related

Secured NiFi won't communicate with NiFi Registry

I have a standalone secured NiFi 1.12.1 in Docker running all fine. I am successfully using Site-to-Site remote processors, Site-to-Site forwarding of NiFi bulletins, calling the NiFi API for self-monitoring, and such things. I log in with a certificate. So far all fine.
The problem crops up when I try to use NiFi Registry. I have access to two instances: secure and insecure.
No matter what exact format I specify (FQDN, just a name, with /nifi-registry or without), when I try to access either NiFi Registry from NiFi (e.g. through importing a process group), it fails with o.a.n.w.a.config.NiFiCoreExceptionMapper org.apache.nifi.web.NiFiCoreException: Unable to obtain listing of buckets: java.net.ConnectException: Connection refused (Connection refused). Returning Conflict response.. The logs contain just this message with an enormous stack trace and nothing more.
I checked all the certificates and they seem OK (certification path, the certificate is for clientAuth as well as serverAuth). I even use them to log into NiFi myself...
What surprises me the most is that it works for things like Site-to-Site protocols, API calls and such, but not for NiFi Registry.
Do you know what the problem might be? Or any ideas what to check?
TL;DR:
Use IP addresses or edit /etc/hosts. The problem is in the translation of the hostname to an IP address.
When I attempted to access the NiFi Registry API directly from NiFi through InvokeHTTP, I noticed an important thing - nothing in a different container responded to me (failed to connect to target):
# Secure NiFi - the one I am troubleshooting
https://<my FQDN>:8443/nifi-api/flow/registries
# Secure NiFi Registry (another container) - the one I am trying to connect to
https://<my FQDN>:18443/nifi-registry-api/buckets
# Insecure NiFi (another container) - just for testing
http://<my FQDN>:28080/nifi-api/flow/registries
# Insecure NiFi Registry (another container) - just for testing
http://<my FQDN>:38080/nifi-registry-api/buckets
Then it dawned on me: to solve a problem with Site-to-Site connections (a discrepancy between the container name and the HTTPS certificate issued for the hosting machine), I had given the container the same name as the hosting Docker machine. To verify, I used IP addresses instead of FQDNs and it worked. Checking /etc/hosts confirmed this - the FQDN pointed to the IP address of the container instead of the Docker host.
Thus, the same FQDN resolved to localhost inside the container and to the Docker host everywhere else. And since nothing was listening on the NiFi Registry port(s) on localhost ...
So as a solution, either edit /etc/hosts to remove the offending line, or use IP addresses to force the traffic through the Docker host.
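To make the fix concrete, here is a minimal sketch; the addresses and names are placeholders, not the actual values from this setup:
# /etc/hosts inside the NiFi container - the offending line maps the shared FQDN to the container itself
172.18.0.5    my-docker-host.example.com
# Either remove that line so the FQDN resolves to the Docker host again,
# or register the Registry client in NiFi by IP so the request goes through the Docker host:
https://192.0.2.10:18443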

Nomad and consul setup

Should I run consul slaves alongside nomad slaves or inside them?
The latter might not make sense at all, but I'm asking just in case.
I brought up my own Nomad cluster with Consul slaves running alongside Nomad slaves (on the worker nodes); my deployable artifacts are Docker containers (Java Spring applications).
The issue with my current setup is that my applications can't reach the Consul slaves (to read configuration); none of 0.0.0.0, localhost, or the worker node IP worked.
Let's say my service exposes 8080. I configured the Docker part (in the HCL file) to use bridge as the network mode, and Nomad maps 8080 to 43210.
Everything is fine until my service tries to reach the Consul slave to read its configuration. Ideally, giving the Nomad worker node IP to Spring as the Consul host should suffice, but for some reason it doesn't.
I'm using the latest version of Nomad.
I configured my nomad slaves like https://github.com/bmd007/statefull-geofencing-faas/blob/master/infrastructure/nomad/client1.hcl
And the link below shows how I configured/ran my consul slave:
https://github.com/bmd007/statefull-geofencing-faas/blob/master/infrastructure/server2.yml
Note: if I use static port mapping and host as the network mode for Docker (in Nomad) I'll be fine, but then I can't deploy more than one instance of each application on each worker node (due to port conflicts).
Nomad jobs listen on a specific host/port pair.
You might want to ssh into the server and run docker ps to see what host/port pair the job is listening on.
a93c5cb46a3e image-name bash 2 hours ago Up 2 hours 10.0.47.2:21435->8000/tcp, 10.0.47.2:21435->8000/udp foo-bar
Additionally, you will need to ensure that the Consul Nomad job is listening on 0.0.0.0, or on the specific IP of the machine. I believe that is this config value: https://www.consul.io/docs/agent/options.html#_bind
All of those need to match up in order for Consul to be reachable.
More generally, I might recommend: if you're going to run consul with nomad, you might want to switch to host networking, so that you don't have to deal with the specifics of the networking within a container. Additionally, you could schedule consul as a system job so that it is automatically present on every host.
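As a rough sketch of that bind setting (the addresses and data directory are placeholders, not values from the setup above), the Consul client agent could be started along these lines:
# run the Consul agent so it is reachable from workloads on the node, not just from localhost
consul agent -data-dir=/opt/consul -bind=<worker-node-ip> -client=0.0.0.0 -retry-join=<consul-server-ip>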
So I managed to solve the issue like this:
nomad.job.group.network.mode = host
nomad.job.group.network.port: port "http" {}
nomad.job.group.task.driver = docker
nomad.job.group.task.config.network_mode = host
nomad.job.group.task.config.ports = ["http"]
nomad.job.group.task.service.connect: connect { native = true }
nomad.job.group.task.env: SERVER_PORT= "${NOMAD_PORT_http}"
nomad.job.group.task.env: SPRING_CLOUD_CONSUL_HOST = "localhost"
nomad.job.group.task.env: SPRING_CLOUD_SERVICE_REGISTRY_AUTO_REGISTRATION_ENABLED = "false"
Plus running the Consul agent (slave) via docker-compose alongside the Nomad agent (slave) with host as the network mode and exposing all required ports (a rough HCL sketch of these settings follows the links below).
Example of nomad job: https://github.com/bmd007/statefull-geofencing-faas/blob/master/infrastructure/nomad/location-update-publisher.hcl
Example of consul agent config (docker-compose file): https://github.com/bmd007/statefull-geofencing-faas/blob/master/infrastructure/server2.yml
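As a rough HCL sketch, the settings listed above translate to a job along these lines (the job, image, and service names are placeholders, not the actual job from the repository above):
job "my-spring-app" {
  datacenters = ["dc1"]
  group "app" {
    network {
      mode = "host"   # host networking, so the app can reach Consul on localhost
      port "http" {}  # dynamic port, handed to Spring via NOMAD_PORT_http
    }
    task "app" {
      driver = "docker"
      config {
        image        = "my-spring-app:latest"
        network_mode = "host"
        ports        = ["http"]
      }
      service {
        name = "my-spring-app"
        port = "http"
        connect {
          native = true  # Consul Connect native, as in the settings above
        }
      }
      env {
        SERVER_PORT                                              = "${NOMAD_PORT_http}"
        SPRING_CLOUD_CONSUL_HOST                                 = "localhost"
        SPRING_CLOUD_SERVICE_REGISTRY_AUTO_REGISTRATION_ENABLED  = "false"
      }
    }
  }
}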
Disclaimer: the LAB is part of a cluster visualization framework called LiteArch Trafik, which I created as an interesting exercise to understand Nomad and Consul.
It took me a long time to shift my mind from K8s to Nomad and Consul; integrating them was one of the efforts I spent the last year on.
When service resolution doesn't work, I have found it is more or less the DNS configuration on the servers.
There is a section for it in the HashiCorp documentation called DNS Forwarding:
Hashicorp DNS Forwarding
I have created a LAB which explains how to set up Nomad and Consul.
But you can use the LAB separately.
I created the LAB after learning the hard way how to install the cluster and how to integrate Nomad and Consul.
For the LAB you need Ubuntu Multipass installed.
You execute one script and you get a fully functional cluster locally, with three servers and three nodes.
It also shows how to install Docker and integrate the services with Consul and DNS services on Ubuntu.
After running the LAB you will get the links to Nomad, Fabio, and Consul.
Hopefully it will guide you through the learning process of Nomad and Consul.
LAB: LAB
Trafik: Trafik Visualizer

The Jenkins tunnel address which I specify in Jenkins -> Configure Cloud does not seem to work. Can someone help me with this?

I have a kubernetes cluster running on GKE and a Jenkins server running on a GCP instance.
I am using the Kubernetes plugin to dynamically create pods on the Kubernetes cluster. I created a pipeline (declarative syntax) for this.
So I am aware that the Jenkins slave agents communicate with the Jenkins master on port 50000.
A snip of the configuration
But for some reason, when I viewed the logs for the JNLP container created by Jenkins, I received an exception - tcpSlaveAgentListener not found.
A snip of the container log
According to the above image, I assume the tunneling is unsuccessful as it is trying to connect to http://34.90.46.204:8080/tcpSlaveAgentListener/ whereas it should connect to http://34.90.46.204:50000/tcpSlaveAgentListener/.
It was a lazy question for me to ask, but I solved the issue.
In the Manage Jenkins -> Configure Global Security settings:
For the option to set a port for TCP inbound agents, unselect the "disable" option (which is selected by default) and then provide a port for the inbound agents to communicate on (50000).
A snip of the configuration
Jenkins uses a TCP port to communicate with agents connected inbound. If you're going to use inbound agents, you can allow the system to randomly select a port at launch (this avoids interfering with other programs, including other Jenkins instances). As it's hard for firewalls to secure a random port, you can instead specify a fixed port number and configure your firewall accordingly.
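As a quick sanity check (the address is the one from the question; adjust it to your own master), you can verify the fixed agent port is reachable from the Kubernetes nodes before re-running the pipeline:
# from a node or a debug pod in the GKE cluster
nc -zv 34.90.46.204 50000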
Hope this helps someone.

Not able to connect to Kafka on AWS EC2

I created an Ubuntu VM on AWS EC2, and on this same VM I'm running one instance of Zookeeper and one instance of Kafka. Zookeeper and Kafka are running just fine; I was even able to create a topic. However, when I try to connect from the terminal on my local machine (macOS), I get this message:
[Producer clientId=console-producer] Connection to node -1 (ec2-x-x-x-x.ap-southeast-2.compute.amazonaws.com/x.x.x.x:9092) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
Inside /config/server.properties I changed the listeners and advertised.listeners properties (see below), as suggested in many topics related to my issue, but I still cannot connect to Kafka on EC2 from my local machine:
I really don't know what I'm missing here...
Kafka version: kafka_2.12-2.2.1
listeners=PLAINTEXT://PRIVATE_IP_ADDRESS:9092
advertised.listeners=PLAINTEXT://PUBLIC_IP_ADDRESS:9092
After almost 3 days of struggling I was able to find the problem. In case someone has the same issue: I solved it by configuring the Security Group on AWS and opening port 9092, which is the port Kafka runs on by default.
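If you prefer the CLI over the console, the equivalent ingress rule looks roughly like this; the security group ID and CIDR are placeholders, and you should restrict the CIDR to your own address rather than opening it wide:
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 9092 --cidr 203.0.113.0/24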

Why I cannot connect to Kafka from outside?

I am running Kafka on an EC2 instance. The Amazon EC2 instance has two IPs: one internal, and a second one for external use.
I created a producer on my local machine, but it gets redirected to the internal IP and gives me a connection unsuccessful error. Can anybody help me configure Kafka on the EC2 instance so that I can run the producer from my local machine? I have tried many combinations, but nothing worked.
In the Kafka FAQ (updated for new properties) you can read:
When a broker starts up, it registers its ip/port in ZK. You need to make sure the registered ip is consistent with what's listed in bootstrap.servers in the producer config. By default, the registered ip is given by InetAddress.getLocalHost.getHostAddress(). Typically, this should return the real ip of the host. However, sometimes (e.g., in EC2), the returned ip is an internal one and can't be connected to from outside. The solution is to explicitly set the host ip and port to be registered in ZK by setting the advertised.listeners property in server.properties.
I solved this problem by setting advertised.host.name in server.properties and metadata.broker.list in producer.properties to the public IP address, and host.name to 0.0.0.0.
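A minimal sketch of that older-style configuration (the public IP is a placeholder; on current Kafka versions use listeners/advertised.listeners instead, since these properties are deprecated):
# server.properties
host.name=0.0.0.0
advertised.host.name=<public-ip>
# producer.properties
metadata.broker.list=<public-ip>:9092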
The easiest way to reach your Kafka server (version kafka_2.11-1.0.0) on EC2 from a consumer in an external network is to change the properties file
kafka_2.11-1.0.0/config/server.properties
And modify the following line
listeners=PLAINTEXT://ec2-XXX-XXX-XXX-XXX.eu-central-1.compute.amazonaws.com:9092
Using your public address
Verified on 2.11-2.0.0
I just did this in AWS. First get the Kafka server to listen on the correct interface/IP using host.name. For your case this would be the internal IP, not localhost, since your intent is for outside Kafka clients to connect. Any local clients will need to use that same address, not localhost.
Then set advertised.host.name to a host name, not an IP address. The trick is to get that host name to always resolve to the correct IP for both internal and external machines. I use /etc/hosts inside and DNS outside. See my full answer about Kafka and name resolution here.
If you want to access it from the LAN, change the following 2 files:
In config/server.properties:
advertised.listeners=PLAINTEXT://server.ip.in.lan:9092
In config/producer.properties:
bootstrap.servers=server.ip.in.lan:9092
In my case, the server.ip.in.lan value was 192.168.15.150
Below are the steps to connect to Kafka from outside the EC2 instance.
Open the Kafka server properties file on EC2.
/kafka_2.11-2.0.0/config/server.properties
Set the value of advertised.listeners to
advertised.listeners=PLAINTEXT://ec2-xx-xxx-xxx-xx.compute-1.amazonaws.com:9092
This should be the Public DNS (IPv4) of your EC2 instance.
Stop the Kafka server.
Start the Kafka server to see the above configuration change in action.
Now you can connect to Kafka on the EC2 instance from outside, or from your localhost.
Tried and tested on kafka_2.11-2.0.0
SSH to your EC2 instance or wherever you're hosting Kafka.
sudo nano /etc/hosts
Add:
127.0.0.1 <your-host-name> localhost
In my case it's:
127.0.0.1 ec2-12-34-56-78.ap-southeast-1.compute.amazonaws.com
Save and exit.
For EC2 you should edit the /etc/hosts file to add:
XXX.XXX.XXX.XXX ip-YYY-YYY-YYY-YYY
where XXX... is your external IP and the ip-YYY-YYY-YYY-YYY is the string returned by the hostname command. You can use 127.0.0.1 instead of your external IP to communicate inside the server.
host.name is deprecated - as are advertised.host.name and advertised.port
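Since those properties are deprecated, the modern equivalent (consistent with the other answers above; the DNS name is a placeholder) would be:
# server.properties
listeners=PLAINTEXT://0.0.0.0:9092
advertised.listeners=PLAINTEXT://ec2-xx-xx-xx-xx.compute-1.amazonaws.com:9092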
