GCP MongoDB external IP connection issue - Spring

I have a Spring MVC application and I am connecting it to a MongoDB cluster.
This is in the application.properties file:
mongodb.url=mongodb://userName:Password@xx.xx.x.xx:27017,xx.xx.x.xx:27017,xx.xx.x.xx:27017/?authSource=admin
The cluster is deployed on GCP with one primary and 2 secondary servers.
However, after deployment, when I hit the API to get the data, I get an error:
{java.net.UnknownHostException: mongodb-3-arbiters-vm-0}}, {address=mongodb-3-servers-vm-1:27017, type=UNKNOWN, state=CONNECTING, exception={com.mongodb.MongoSocketException: mongodb-3-servers-vm-1}, caused by {java.net.UnknownHostException: mongodb-3-servers-vm-1}}
The external IPs are getting mapped to the server names shown on the GCP dashboard (xx.xx.xx.xx:27017 to mongodb-3-servers-vm-1:27017), hence the UnknownHostException. What can I do to avoid that?

When connecting to a replica set, the hostnames, IP addresses and port numbers provided in the connection string are the seedlist.
The driver connects to the hosts in the seedlist in order to get an initial connection, and uses that connection to perform server discovery: it queries the first server it connects to for the host names, port numbers, and status of the other members of the replica set. The server obtains this information from the replica set configuration document.
This means that the hostnames and port number you used when running rs.initiate or rs.add must be resolvable by both the replica set members and each client host that will be connecting.
There is a feature that supports passing remote clients a different host name, similar to split-horizon DNS, but outside of the git repository, I don't see any mention of it.
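In your case, the most direct fix is usually to make those member hostnames resolvable from the client host. As a minimal sketch, assuming the dashboard mapping you describe is accurate (only mongodb-3-servers-vm-1 and mongodb-3-arbiters-vm-0 appear in your error; the other names below are purely illustrative), you could add entries like these to /etc/hosts on the machine running the Spring application:
xx.xx.x.xx  mongodb-3-servers-vm-0
xx.xx.x.xx  mongodb-3-servers-vm-1
xx.xx.x.xx  mongodb-3-servers-vm-2
xx.xx.x.xx  mongodb-3-arbiters-vm-0
Alternatively, if the members can reach each other on the external addresses, you could reconfigure the replica set (rs.conf() followed by rs.reconfig()) so that each member's host field is an externally resolvable name or IP; whatever addresses end up in the replica set configuration are the ones the driver will try to connect to.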

Related

Can't connect to CockroachDB on GCP

I deployed a CockroachDB cluster on 4 GCP instances in secure mode and configured a TCP proxy load balancer to distribute the traffic. When I try to connect through the load balancer, I sometimes get connected, but most of the time I get a connection timeout, with this error message in the instances' CockroachDB logs:
http: TLS handshake error from 130.211.1.145:50475: tls: first record does not look like a TLS handshake
The 130.211.1.145 address in the error message is the GCP load balancer's IP address.
Any thoughts?

GET/POST request from one Jelastic environment to another: connection refused

I'm trying to make GET/POST requests from one Jelastic environment to another. Both are running Node/Express. I tried the environment name (which works from my local machine) and the internal IP address, but I always get a connection refused error:
FetchError: request to https://10.101.19.55/converter failed,
reason: connect ECONNREFUSED 10.101.19.55:443
Any hint on how to solve this is greatly appreciated.
You don't mention how your application is deployed (on what node type), but most likely it isn't listening on port 443; that only works remotely (if it does) because of proxying by the Shared Load Balancer of the Jelastic platform.
If you want to connect internally (between Jelastic nodes within the same platform), your requests do not pass via the Shared Load Balancer so cannot take advantage of that proxying.
In other words, you need to:
connect directly to the port your application actually runs on, such as https://10.101.19.55:5000/converter (if your application is running/listening on port 5000); see the sketch after this list
ensure that your firewall rules permit access from where you're connecting from
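As a minimal sketch (assuming Node 18+ with a global fetch, or node-fetch, which your FetchError suggests; port 5000 and the /converter path are placeholders carried over from above):
// in the target environment: make the app listen on its own port
const express = require('express');
const app = express();
app.get('/converter', (req, res) => res.json({ ok: true }));
app.listen(5000);
// in the calling environment: hit that port directly over the internal IP
fetch('http://10.101.19.55:5000/converter')
  .then(r => r.json())
  .then(data => console.log(data));
Note the plain http:// here: unless your Node application terminates TLS itself, an internal request straight to its port will not be HTTPS; the HTTPS you see externally is added by the Shared Load Balancer.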

Requiring a public IP address for Kafka running on EC2

We have Kafka and ZooKeeper installed on a single AWS EC2 instance. We have Kafka producers and consumers running on separate EC2 instances which are in the same VPC and have the same security group as the Kafka instance. In the producer and consumer configs we use the internal IP address of the Kafka server to connect to it.
But we have noticed that we need to set the public IP address of the EC2 server in advertised.listeners for the producers and consumers to connect to the Kafka server:
advertised.listeners=PLAINTEXT://PUBLIC_IP:9092
We also have to whitelist the public IP addresses and open traffic on port 9092 for each of our EC2 servers running producers and consumers.
We want the traffic to flow over internal IP addresses. Is there a way to avoid whitelisting the public IP addresses and opening port 9092 for each of our producer and consumer servers?
If you don't want to open access to everyone for any of your servers, I would recommend putting a proper high-performance web server like nginx or Apache HTTPD in front of your application servers, acting as a reverse proxy. This way you can also add SSL encryption, and your servers stay on a private network while only the web server is exposed. It's easy to set up and you can find many tutorials, like this one: http://webapp.org.ua/sysadmin/setting-up-nginx-ssl-reverse-proxy-for-tomcat/
Because of the variable nature of the environments Kafka may need to work in, it only makes sense to be explicit in declaring the addresses Kafka can use. The only way to guarantee that external parts of a system can reach the broker by IP address is to make sure you advertise an address they can actually reach.
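One concrete way to be explicit about that, as a sketch using Kafka's support for multiple named listeners (PRIVATE_IP and PUBLIC_IP are placeholders, and the port numbers are arbitrary), is to declare separate internal and external listeners in server.properties:
listeners=INTERNAL://0.0.0.0:9092,EXTERNAL://0.0.0.0:9093
advertised.listeners=INTERNAL://PRIVATE_IP:9092,EXTERNAL://PUBLIC_IP:9093
listener.security.protocol.map=INTERNAL:PLAINTEXT,EXTERNAL:PLAINTEXT
inter.broker.listener.name=INTERNAL
Producers and consumers inside the VPC then bootstrap against PRIVATE_IP:9092 and never touch the public address, so only the external port needs to be opened for whatever clients really are outside the VPC.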

Is it possible to configure map-reduce to use hostnames rather than IPs during data transfer?

Currently I am trying to migrate HDFS files between two different Hadoop clusters using distcp.
The source cluster is isolated in its own network; each machine has both an external and an internal IP, and the namenode talks to the datanodes through the internal IP addresses.
On the destination side, distcp always fails to fetch data, because it tries to connect to the source datanodes using their internal IPs, which are not accessible from the destination.
org.apache.hadoop.hdfs.BlockReaderFactory: I/O error constructing remote block reader.
org.apache.hadoop.net.ConnectTimeoutException: 60000 millis timeout while waiting for channel to be ready for connect. ch : java.nio.channels.SocketChannel[connection-pending remote=/10.47.194.252:50010]
Is it possible to switch from IPs to hostnames in this case? Then I could map the source hostnames to external IPs on the destination side.

Kafka Server Properties - unable to connect to broker

Let's say Kafka is running as a single-node broker on an AWS EC2 instance. The instance has the internal private IP 10.0.0.1. I want to connect to that broker directly from the same EC2 instance and from another EC2 instance in the same VPC and subnet. The security groups allow the connection.
Which settings do I have to use to get the connection running?
I tried listeners=PLAINTEXT://0.0.0.0:9092 and advertised.listeners=PLAINTEXT://0.0.0.0:9092. With that setting I can connect to the broker from local (the same instance where the broker is running), but I can't reach the broker from the second EC2 instance.
Does anybody have any idea?
If you are trying to connect to the Kafka instance inside AWS from one EC2 instance to another, the internal IP address should work.
The producers and consumers should use the internal private IP addresses as well, for both the broker and ZooKeeper.
Additionally, you may need to verify that iptables rules at the OS level aren't blocking the communication.
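Putting that together, a minimal server.properties sketch for this setup (assuming the broker's private IP is 10.0.0.1, as described):
listeners=PLAINTEXT://0.0.0.0:9092
advertised.listeners=PLAINTEXT://10.0.0.1:9092
Binding to 0.0.0.0 in listeners is fine, but advertised.listeners is the address the broker hands back to clients, so it has to be something the second instance can actually reach, such as the private IP, rather than 0.0.0.0.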
