Hazelcast AWS EC2 Autodiscovery not working

I am using hazelcast-all-3.5.4.jar in my project.
I tried running two instances of my application on one EC2 instance, and both joined the cluster with <aws enabled="true">.
I then created another EC2 instance (using the AMI option, same settings). When I run the same application as on the other node, the new node does not join the cluster.
The INFO message Could not connect to: <ipaddress>. Reason: SocketException[Connection timed out to address <ipaddress>] is printed.
It is finding the other node, though:
Connecting to <ipaddress>, timeout: 0, bind-any: true
The following messages are also displayed:
INFO <ipaddress> Creating AWSJoiner
com.hazelcast.core.LifecycleService
com.hazelcast.cluster.impl.TcpIpJoinerOverAWS
com.hazelcast.nio.tcp.SocketConnector
Any idea what could be the reason?
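For context, the join section of hazelcast.xml for AWS discovery on 3.5.x typically looks like the sketch below; the credentials, region, security group, and tag values are placeholders. A common cause of one-way discovery like this is the security group not allowing inbound TCP on the Hazelcast port (5701 by default) between the instances.

<network>
    <join>
        <multicast enabled="false"/>
        <tcp-ip enabled="false"/>
        <aws enabled="true">
            <!-- placeholder credentials -->
            <access-key>my-access-key</access-key>
            <secret-key>my-secret-key</secret-key>
            <region>eu-west-1</region>
            <security-group-name>hazelcast-sg</security-group-name>
            <tag-key>cluster</tag-key>
            <tag-value>hazelcast</tag-value>
        </aws>
    </join>
</network>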

Related

Intermittent connectivity issue with Windows containers in EKS

I have an EKS cluster running with both Linux and Windows nodes. On the Windows nodes I am scheduling pods. They run for about 30 minutes and then get removed. The first thing any pod does is download some data from S3 using the AWS CLI installed on it.
I am facing some intermittent connectivity issues. Pods get spun up and sometimes fail with a fatal error:
Could not connect to the endpoint URL: "https://sts.eu-west-1.amazonaws.com"
As far as I can see, this only happens when I schedule more than one pod on a node. I do use a smaller instance type (m5.large), but I am not close to reaching the pod limit of this instance type. When there is one pod per node, they can all connect and download data from S3.
Reading the documentation, I can see it is possible to schedule more than one pod per EC2 instance, but I am unsure what the requirements on the EC2 instance are to give all those pods access to S3. I did try adding more ENIs to the EC2 instances, but this prevented the instances from being registered as nodes in the EKS cluster.
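One thing worth checking here is per-node IP capacity rather than the scheduler's pod limit: each pod needs a VPC IP address to reach the STS endpoint, and that capacity is bounded by the instance type's ENI and per-ENI address limits. Assuming a configured AWS CLI, those limits can be read directly (m5.large taken from the question):

# How many ENIs and IPv4 addresses per ENI an m5.large supports;
# the VPC CNI derives its default max-pods value from these numbers.
aws ec2 describe-instance-types --instance-types m5.large \
  --query 'InstanceTypes[].NetworkInfo.{MaxENIs:MaximumNetworkInterfaces,IPv4PerENI:Ipv4AddressesPerInterface}'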

Not able to connect to Kafka on AWS EC2

I created an Ubuntu VM on AWS EC2, and in this same VM I'm running one instance of Zookeeper and one instance of Kafka. Zookeeper and Kafka are running just fine; I was even able to create a topic. However, when I try to connect from my local machine (macOS) from the terminal, I get this message:
[Producer clientId=console-producer] Connection to node -1 (ec2-x-x-x-x.ap-southeast-2.compute.amazonaws.com/x.x.x.x:9092) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
Inside /config/server.properties I changed the listeners and advertised.listeners properties (see below), as suggested in many topics related to my issue, but I still cannot connect to Kafka on EC2 from my local machine:
I really don't know what I'm missing here...
Kafka version: kafka_2.12-2.2.1
listeners=PLAINTEXT://PRIVATE_IP_ADDRESS:9092
advertised.listeners=PLAINTEXT://PUBLIC_IP_ADDRESS:9092
After almost 3 days of struggling, I was able to find the problem. In case someone else has the same issue: I solved it by configuring the Security Group on AWS and opening port 9092, the port Kafka listens on by default.
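For reference, the same inbound rule can be added with the AWS CLI; the security-group ID below is a placeholder, and the CIDR should be narrowed to your own address range rather than opened to the world:

# Allow inbound traffic on the default Kafka broker port
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp --port 9092 \
  --cidr 203.0.113.0/24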

How can I connect to AWS Documentdb with Robo 3T?

Using the latest Robo 3T and the connection string provided by AWS
mongodb://<dbname>:<insertYourPassword>@example-db.cluster-c2e1234stuff0e.eu-west-2.docdb.amazonaws.com:27017
I get this Error:
Reason:
SSL tunnel failure: Network is unreachable or SSL connection rejected by server.
Reason: Connect failed
I have also tried following THIS walkthrough but had no joy.
I have read that it is possible to SSH to an EC2 instance in the same VPC and access DocumentDB that way, but ideally I would like to access it directly and not pay for an extra EC2 instance, if I have that right.
I have tried via Mongo shell too and get the following response:
Error: couldn't connect to server example-db.cluster-c2eblahblaho0e.eu-west-2.docdb.amazonaws.com:27017, connection attempt failed: NetworkTimeout: Error connecting to example-db.cluster-c2eblahblaho0e.eu-west-2.docdb.amazonaws.com:27017 (<IP address>) :: caused by :: Socket operation timed out :
connect#src/mongo/shell/mongo.js:344:17
#(connect):2:6
exception: connect failed
What I suspect is happening is that either you do not have an EC2 instance in the same VPC as your DocumentDB cluster, or that EC2 instance is not reachable from your laptop. I'd first connect to the EC2 instance with SSH to establish connectivity, and then use that EC2 instance as an SSH proxy from Robo 3T.
For context, Amazon DocumentDB clusters deployed within a VPC can be accessed directly by EC2 instances or other AWS services that are deployed in the same VPC. Additionally, Amazon DocumentDB can be accessed by EC2 instances or other AWS services in different VPCs in the same region or other regions via VPC peering.
The advantage of deploying clusters within a VPC is that VPCs provide a strong network boundary to the Internet. A common way to connect to DocumentDB from your laptop is to create an EC2 instance within the same VPC as your DocumentDB cluster and SSH tunnel through that EC2 instance to your cluster: https://docs.aws.amazon.com/documentdb/latest/developerguide/connect-from-outside-a-vpc.html
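In practice the tunnel looks something like the sketch below, with the key path and EC2 host as placeholders and the cluster endpoint taken from the question; once it is up, Robo 3T connects to localhost:27017.

# Forward local port 27017 through an EC2 jump host in the cluster's VPC
ssh -i ~/.ssh/my-key.pem -N -L 27017:example-db.cluster-c2e1234stuff0e.eu-west-2.docdb.amazonaws.com:27017 ec2-user@<ec2-public-dns>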
To minimize costs for local development, start with the smallest EC2 instance size and utilize the start/stop functionality when not using the cluster.
The same can be done with DocumentDB. When you are developing, you can save on instance costs by stopping the cluster when it is no longer needed: https://docs.aws.amazon.com/documentdb/latest/developerguide/db-cluster-stop-start.html
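Assuming the AWS CLI, stopping and starting the cluster is a one-liner each way (the cluster identifier is a placeholder):

# Stop the cluster when idle, start it again before the next session
aws docdb stop-db-cluster --db-cluster-identifier example-db
aws docdb start-db-cluster --db-cluster-identifier example-db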
An alternative is to utilize AWS Cloud9: https://docs.aws.amazon.com/documentdb/latest/developerguide/connect-with-cloud9.html. This solution still requires an EC2 instance in the same VPC as your Amazon DocumentDB cluster. What is useful about this solution is that Cloud9 provides a mechanism to automatically shut down the EC2 instance after it has been idle for 30 minutes, for example, to help save costs.

NiFi - connect to another instance (S2S)

I'm trying to use the SiteToSiteProvenance Reporting Task.
The objective is to send provenance data between two dockerized instances of NiFi, one at port 8080 and another at port 9090.
I've created an input port creatively called "IN" on the destination NiFi and configured the reporting task on the source NiFi to point at it.
However, I'm getting the following error:
Unable to refresh Remote Group's peers due to Unable to communicate with remote NiFi cluster in order to determine which nodes exist in the remote cluster
I've also exposed port 10000 on the destination container.
As mentioned in the comments, it appears there was a networking issue between the containers.
The asker finally resolved it by not using containers.
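For anyone keeping the containerized setup, the raw site-to-site port has to be both published and configured. A sketch, assuming the apache/nifi image and its NIFI_REMOTE_INPUT_SOCKET_PORT environment variable (ports and names are illustrative):

# Run the destination NiFi with the UI port and the S2S socket port published
docker run -d --name nifi-destination \
  -p 9090:8080 \
  -p 10000:10000 \
  -e NIFI_REMOTE_INPUT_SOCKET_PORT=10000 \
  apache/nifi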

An EC2 instance behind a load balancer is not terminating after reboot, but the load balancer is going out of service

There is a single EC2 instance deployed behind an ELB using CloudFormation, and now I am trying to add cron jobs to the crontab by updating the CF stack. However, after updating the stack I rebooted the server, but the changes are not reflected on the server.
It seems like only the application on the server was rebooted, not the OS. When I checked the status of the ELB after the reboot, the instance state was OutOfService, while the instance state on the EC2 tab shows running.
Note: There is no autoscaling group attached.
Check whether the application on the EC2 instance is listening on the port configured in the load balancer's health check.
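A quick way to verify this on the instance itself; the port and path below are placeholders for whatever the ELB health check is configured with:

# Confirm something is listening on the health-check port
sudo ss -tlnp | grep ':8080'

# Probe the health-check target locally
curl -v http://localhost:8080/health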
