dial tcp <REMOTE-IP>:6379: connect: connection refused - go

I'm building an application that runs on GKE (Google Kubernetes Engine) and a Redis server that runs on a GCE instance.
When I try to connect from the application pod on GKE to Redis on GCE, I get connection refused (dial tcp <REMOTE-IP>:6379: connect: connection refused).
The application is written in Go, and the Redis library is go-redis (v8).
Why can't I connect?
The source code for the connection part and the part where the error occurs is as follows.
redisServerName = os.Getenv("REDIS_SERVER") // "sample.com:6379"
redisClient = redis.NewClient(&redis.Options{
	Addr:     redisServerName,
	Password: "",
	DB:       0,
})
p, err := redisClient.Ping(ctx).Result()
log.Println(p, err)
The hostname resolves, so it is not a DNS problem, and redis-cli can connect, so it does not seem to be a firewall problem.
# redis-cli -h <REMOTE_IP> ping
PONG
Postscript
Here is the result of running the commands from the pod where the Go application is running:
/# redis-cli -h redis.sample.com
redis.sample.com:6379> // can connect
/# nc redis.sample.com 6379
// There is NO response.

I assert that every application in a container has the same layer 4 (for Redis, TCP) network access. Since Redis provides no significant access control, this means that if one app in a container has network access to the Redis server, every other app in the same container does too, and if one can't contact Redis, neither can the others.
The qualifier "in the same container" is where testing gets tricky, because it isn't helpful or feasible to reproduce your k8s and GKE config here.
ICMP ping and tcp/6379 are different: just because ping works doesn't mean Redis can connect, and vice versa. And different containers can have different network access in k8s and GKE.
Do the following test on the app container, to take everything possible out of the equation.
apk add redis only pulls in a few packages (about 8 MB when I tested) and provides redis-cli, but you don't need any client app for Redis; it's simple enough to test with, say, netcat. You don't have to issue a valid Redis command, either - if you get an -ERR unknown command response, you know the network works:
/ # echo "hi, redis!" |nc localhost 6379
-ERR unknown command `hi,`, with args beginning with: `redis!`,
If it works there and not in Go, it's probably because the environment variable REDIS_SERVER isn't set properly. So you might want to test that at the command line, as well.
nc $REDIS_SERVER 6379
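If nc and redis-cli both work from the app pod but the Go client still gets connection refused, a quick way to rule out a bad or empty REDIS_SERVER value is to check it and do a raw TCP dial before constructing the client. This is only a minimal sketch, not the poster's code; the 3-second timeout and the log messages are my own choices:

package main

import (
	"log"
	"net"
	"os"
	"time"
)

func main() {
	// REDIS_SERVER is expected to hold "host:port", e.g. "redis.sample.com:6379".
	addr := os.Getenv("REDIS_SERVER")
	if addr == "" {
		log.Fatal("REDIS_SERVER is not set")
	}

	// Raw TCP dial: if this fails, the problem is the environment or the
	// network path from the pod, not the go-redis configuration.
	conn, err := net.DialTimeout("tcp", addr, 3*time.Second)
	if err != nil {
		log.Fatalf("cannot reach %s: %v", addr, err)
	}
	conn.Close()
	log.Printf("TCP connection to %s succeeded", addr)
}

If this prints the success line inside the pod while redis.NewClient still fails, compare the value it prints with what you pass to redis.Options.Addr.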

Related

Access public PostgreSQL server (Amazon RDS) from personal computer through proxy

I'm new to Amazon Web Service (AWS).
I already created a PostgreSQL database on AWS RDS:
Endpoint: database-1.XXX.rds.amazonaws.com
Port: 5432
Public accessibility: Yes
Availability zone: ap-northeast-1c
After that, I will push my application that uses the database to AWS (maybe deploy it to EKS).
However, I want to try testing the database server from my local computer first.
I haven't tried testing from my laptop PC at home yet, but I think it will connect OK because my laptop PC is not using the HTTP proxy to connect to the network.
The problem is that I want to test from my company PC, which needs an HTTP proxy to connect to the internet. The PC spec:
Windows 10
Installed PostgreSQL 10
First, I tried the psql command line:
psql -h database-1.XXXX.rds.amazonaws.com -U postgre
> Unknown host
set http_proxy=http://user:password#my_company_proxy:3128
set https_proxy=http://user:password#my_company_proxy:3128
psql -h database-1.XXXX.rds.amazonaws.com -U postgre
> Unknown host
set http_proxy=http://my_second_company_proxy:3128
set https_proxy=http://my_second_company_proxy:3128
psql -h database-1.XXXX.rds.amazonaws.com -U postgre
> Unknown host
Then, I tried using the pgAdmin tool.
According to a post on the internet, the "SSH Tunnel" option can be used to enter the proxy settings.
However, an error message is shown when I try it.
So, can anyone suggest whether we can connect to the public PostgreSQL server through an HTTP proxy?
I think the problem is that Postgres speaks a plain TCP protocol and you are trying to use an HTTP proxy. Also, you're trying to create an SSH tunnel against your HTTP proxy server, which won't work.
So I'd suggest the following solutions:
Use a TCP proxy instead of an HTTP proxy
Create an EC2 instance (or any instance) that accepts SSH from your company network and has access to the public internet, so that you can create an SSH tunnel through that instance to reach the database.
NOTE: Make sure your PostgreSQL is accessible from the public internet (this is usually a bad idea, but that's out of scope for this question); sometimes security group configs prevent connections from the public internet.
Just add all the ports (5432, 3128, ...) to the Security Group of your RDS and specify your IP. Don't forget the "/32".
Let me add that "unknown host" is usually an indication that you're not resolving the DNS hostname. Also, your HTTP proxy should not interfere with connections to databases, since they aren't on port 80 or 443. A couple of things you can try (assuming you're on Windows), substituting your actual URL:
nslookup database-1.XXXX.rds.amazonaws.com
telnet database-1.XXXX.rds.amazonaws.com 5432
You should also check the security group that is attached to your RDS and make sure you've opened up the ip address that you're originating from on port TCP/5432.
Lastly check that your VPC has DNS and Hostnames enabled. https://docs.aws.amazon.com/vpc/latest/userguide/vpc-dns.html#vpc-dns-updating

"bind: An attempt was made to access a socket in a way forbidden by its access permissions"

Say we have frontend and backend containers based on Docker Desktop (for Windows).
The backend container uses port 9001, and the frontend container talks to it on 9001.
The problem is that port 9001 is already in use on Windows 10 by an Intel driver, so it is impossible to run a container on this port:
Error response from daemon: Ports are not available: listen tcp 0.0.0.0:9001: bind: An attempt was made to access a socket in a way forbidden by its access permissions.
Could you please advise how to handle this port if there is no way to change it directly in the application code?
A couple of ways:
When using a docker run command, specify the host port to use and set it to something other than 9001, e.g. -p 9002:9001, or in Docker Compose:
ports:
- '9002:9001'
Then use port 9002 instead of 9001 when accessing the container from the host (Win 10).
Use Nginx and set up a reverse proxy: leave the host port unset when starting the container so no external port is opened on the host, and have the reverse proxy pass requests to the container's port 9001.

Connect to a MariaDB Docker container in its own Docker network remotely

Hi, what I am actually trying to do is connect remotely from a MySQL client in Windows Subsystem for Linux (mysql -h 172.18.0.2 -P 3306 -u root -p). Before that, I started the Docker container as follows: docker container run --name testdb --network testnetwork -p 3306:3306 -e MYSQL_ROOT_PASSWORD=mysqlRootPassword -e MYSQL_DATABASE=localtestdb -d mariadb/server.
The reason I put the container in its own network is that I also have a dockerized Spring Boot application (GraphQL server) which should communicate with this DB. But whenever I try to connect with the command shown above from the built-in mysql client in my Windows Subsystem for Linux, I get the error message: ERROR 2002 (HY000): Can't connect to MySQL server on '172.18.0.2' (115).
What I already tried on my own is checking whether the configuration file line (bind-address) is commented out, but that doesn't help. Interestingly, setting up a Docker container with MariaDB and connecting from outside already worked before; now, when I try exactly the same thing, with the only difference that I put the container in its own existing network, it won't work.
Hopefully there is someone out there who can help me with this annoying problem.
Thanks!
So far,
Daniel
//edit:
Now I have tried the advice from this topic: How to configure containers in one network to connect to each other (server -> mysql)?. Furthermore, I linked my Spring Boot (server) application to the MariaDB container with the "--link databaseContainerName" parameter.
Now I am able to start both containers without any error, but I am still not able to connect remotely to the MariaDB container, which is running in a virtual Docker network with its own subnet.
I explored this recently - this is by design: container isolation. Usually only the main host (e.g. the httpd service) is accessible externally, hiding the internal connections (the hosts it communicates with to deliver the response).
A container created in its own network is not accessible from external addresses, not even from containers on the same bridge but in another network (e.g. 172.19.0.0/16).
Your container should be accessible on the Docker host address (127.0.0.1 if run locally) and the mapped ("-p 3306:3306") port, 3306. But of course it won't work if several running DB containers map to the same host port.
Isolation is done with the firewall - iptables. You can list the rules (iptables -L) from the Docker host to see that.
You can modify the firewall to allow external access to the internal networks. I used this rule:
iptables -A DOCKER -d 172.16.0.0/12 -j ACCEPT
After that, your containerized MySQL engine should be accessible on the internal address 172.18.0.2 and the original (not mapped) port 3306.
Warnings
it disables all isolation, don't use it in production;
you have to run this after every Docker start - the rules are created/modified by Docker on the fly;
not every Docker container will respond to ping; check it from the Docker host (the Linux subsystem in this case) first, and from the Windows cmd later.
I used this option (in docker.service) to make the rule permanent:
ExecStartPost=/bin/sh -c '/etc/iptables/accept172_16.sh'
For Docker on an external (shared on the LAN) host, you should use route add (or the hosts file on your machine or router) to route 172.x.x.x addresses to the LAN Docker host.
Hint: use the Portainer project (with restart policy: always) to manage Docker containers. It makes config errors easier to spot, too.

Cannot connect to Container-optimized-os (running a spring-boot application using docker) using external ip

I have created a Google Compute Engine instance with the Container-Optimized OS image.
I have configured the firewall to allow HTTP and HTTPS.
I am using a Docker image with a Spring Boot application which connects to Cloud SQL. When I use the run command over SSH on the Compute Engine instance, i.e. docker run --rm name, the Spring Boot app starts successfully.
When I try to access the web services through the Compute Engine instance's external IP, it does not work.
I went through a different question and found that I should first try the sudo wget http://localhost command on the instance CLI, and that if it works then everything should be good. But I am getting a connection refused message on 127.0.0.1:80.
I also tried the command to open the port on Container-Optimized OS, i.e.
sudo iptables -w -A INPUT -p tcp --dport 80 -j ACCEPT
but nothing is working.
The default port for Spring Boot is 8080 and not 80.
Run this command inside the instance container to see what ports are in LISTENING state:
sudo netstat -tulpn | grep LISTEN
You can redirect port 80 to port 8080 with this command:
sudo iptables -t nat -I PREROUTING -p tcp --dport 80 -j REDIRECT --to-port 8080
Note: This iptables command only redirects port 80 to 8080 on network interfaces. This has no effect for localhost or 127.0.0.1.
For Google Compute Engine instances you do not need to enable ports using iptables. This is done via Google VPC firewall rules. You can use both but make sure you understand exactly what you are configuring and the side effects.
Note: Your Spring Boot application needs to listen on 0.0.0.0 and not 127.0.0.1 nor localhost. The last two are internal only addresses. 0.0.0.0 means listen on all network interfaces.
Note: Do not use sudo in front of wget. This is not necessary.
First, confirm what port your Spring Boot application uses - whether it's 8080 or 80. This depends on what you have configured inside the application.properties file. This port is referred to as ContainerPort in the steps below.
Execute docker run <image-name>:<tag>. This will run the image and show the container logs on the console. If there is something wrong with your Spring Boot app, the logs will show it and the container will shut down. Press Ctrl+C to stop the container and return to the shell.
If there is no error in step 1, run docker run -d -p <HostPort>:<ContainerPort> <image-name>:<tag>. Here HostPort is any free port on your GCP host VM and ContainerPort is the port used by your Spring Boot application within the container. The -d option starts your container in detached mode.
Run docker ps and make sure that the container started in step 2 is running. It may not run if there is an error - for example if the HostPort you specified is already in use.
If step 3 shows that the container is running, execute curl http://localhost:<HostPort>/<End-Point-Path>. Here End-Point-Path is a valid path to a working endpoint within the container. If the endpoint is correct you should see expected result from the spring-boot app in the console.
Navigate to Google Cloud Console -> VPC network -> Firewall rules and add a firewall rule to open HostPort on your GCP VM.
Access your endpoint via the VM's external IP with URL - http://<VM-External-IP>:<HostPort>/<End-Point-Path>
Unless there is an application issue with your spring-boot app these steps should get you going.
I was able to build the correct solution with your help (John Hanley and Cyac).
I am combining both solutions here to help the next person facing this.
As John said, by default Spring Boot uses port 8080, not 80, and as Cyac specified, you need to set the port to 80 explicitly in the application.properties file using
server.port=80
Make sure you expose port 80 in the Docker image.
On GCP Container-Optimized OS, make sure you have allowed HTTP and HTTPS traffic.
Run command:
sudo iptables -w -A INPUT -p tcp --dport 80 -j ACCEPT
Run docker using:
docker run -p 80:80 SPRING_IMAGE
where SPRING_IMAGE is the name of the Docker image with the Spring Boot build.
Test by using curl http://localhost/ENDPOINT_NAME, e.g. http://localhost/shops/all

How to configure direct http access to EC2 instance?

This is a very basic Amazon EC2 question, but I'm stumped so here goes.
I want to launch an Amazon EC2 instance and allow access to HTTP on ports 80 and 8888 from anywhere. So far I can't even connect to those ports on the instance using its own IP address (though connecting to localhost works).
I configured the "default" security group for HTTP using the standard HTTP option on the management console (and also SSH).
I launched my instance in the default security group.
I connected to the instance over SSH (port 22) twice; in one window I launched an HTTP server on port 80, and in the other window I verified that I could connect to it using "localhost".
However, when I try to access HTTP from the instance (or anywhere else) using either the public DNS name or the private IP address, I get "connection refused".
What am I doing wrong, please?
Below is a console fragment, run from the instance itself, showing the wget that succeeds and the two that fail.
--2012-03-07 15:43:31-- http://localhost/
Resolving localhost... 127.0.0.1
Connecting to localhost|127.0.0.1|:80... connected.
HTTP request sent, awaiting response... 302 Moved Temporarily
Location: /__whiff_directory_listing__ [following]
--2012-03-07 15:43:31-- http://localhost/__whiff_directory_listing__
Connecting to localhost|127.0.0.1|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: unspecified [text/html]
Saving to: “__whiff_directory_listing__”
[ <=> ] 7,512 --.-K/s in 0.03s
2012-03-07 15:43:31 (263 KB/s) - “__whiff_directory_listing__” saved [7512]
[ec2-user@ip-10-195-205-30 tmp]$ wget http://ec2-50-17-2-174.compute-1.amazonaws.com/
--2012-03-07 15:44:17-- http://ec2-50-17-2-174.compute-1.amazonaws.com/
Resolving ec2-50-17-2-174.compute-1.amazonaws.com... 10.195.205.30
Connecting to ec2-50-17-2-174.compute-1.amazonaws.com|10.195.205.30|:80... failed:
Connection refused.
[ec2-user@ip-10-195-205-30 tmp]$ wget http://10.195.205.30/
--2012-03-07 15:46:08-- http://10.195.205.30/
Connecting to 10.195.205.30:80... failed: Connection refused.
[ec2-user@ip-10-195-205-30 tmp]$
The standard TCP sockets interface requires that you bind to a particular IP address when you send or listen. There are a couple of somewhat special addresses: localhost (which you're probably familiar with), which is 127.0.0.1, and 0.0.0.0, also known as INADDR_ANY (shorthand for ANY ADDRESS). The latter is a way to listen on ANY, or more commonly ALL, addresses on the host - a way to tell the kernel/stack that you're not interested in binding to one particular IP address.
So, when you're setting up a server that listens to "localhost" you're telling the service that you want to use the special reserved address that can only be reached by users of this host, and while it exists on every host, making a connection to localhost will only ever reach the host you're making the request from.
When you want a service to be reachable everywhere (on a local host, on all interfaces, etc.) you can specify 0.0.0.0.
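To make the difference concrete, here is a minimal sketch in Go (not the poster's server; the port and handler are just placeholders) showing the two bind addresses side by side:

package main

import (
	"fmt"
	"log"
	"net/http"
)

func main() {
	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintln(w, "hello")
	})

	// Bound to loopback only: reachable from the instance itself via
	// http://localhost/ or http://127.0.0.1/, but "connection refused"
	// from the public DNS name or the private IP.
	// log.Fatal(http.ListenAndServe("127.0.0.1:80", nil))

	// Bound to all interfaces: reachable on every address the host has,
	// provided the security group / OS firewall allows port 80.
	log.Fatal(http.ListenAndServe("0.0.0.0:80", nil))
}

The same distinction applies whatever the server is written in; the wget failures in the question are consistent with a loopback-only bind, which the asker confirms further down.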
(0) It sounds silly, but the first thing you need to do is make sure your web server is actually running.
(1) You need to edit your Security Group to let incoming HTTP packets access your website. If your website is listening on port 80, you need to edit the Security Group to open access to port 80 as mentioned above. If your website is listening on some other port, then you need to edit the Security Group to access that other port.
(2) If you are running a Linux instance, the iptables firewall may be running by default. You can check that this firewall is active by running
sudo service iptables status
on the command line. If you get output, then the iptables firewall is running. If you get a message "Firewall not running", that's pretty self-explanatory. In general, the iptables firewall is running by default.
You have two options: knock out the firewall or edit the firewall's configuration to let HTTP traffic through. I opted to knock out the firewall as the simpler option (for me).
sudo service iptables stop
There is no real security risk in shutting down iptables because, if active, it merely duplicates the functionality of Amazon's firewall, which is configured by the Security Group. We are assuming here that AWS doesn't misconfigure its firewalls - a very safe assumption.
(3) Now, you can access the URL from your browser.
(4) The Microsoft Windows Servers also run their personal firewalls by default and you'll need to fix the Windows Server's personal firewall, too.
Correction: by default, AWS does not enable OS-level firewalls such as iptables (CentOS) or UFW (Ubuntu) when you create new EC2 instances. That's why EC2 instances in the same VPC can SSH into each other, and why you can "see" the web server you fired up from another EC2 instance in the same VPC.
Just make sure that your RESTful API is listening on all interfaces, i.e. 0.0.0.0:portID.
Since you are getting connection refused (packets are being rejected), I bet iptables is causing the problem. Try running
iptables -I INPUT -p tcp --dport 80 -j ACCEPT
iptables -I INPUT -p tcp --dport 8888 -j ACCEPT
and test the connection.
You will also need to make those rules permanent, which you can do by adding the above lines to, e.g., /etc/sysconfig/iptables if you are running Red Hat.
Apparently I was "binding to localhost" whereas I needed to bind to 0.0.0.0 so that port 80 would respond on all incoming interfaces. This is a subtlety of TCP/IP that I don't fully understand yet, but it fixed the problem.
Had to do the following:
1) Enable HTTP access in the instance config; it wasn't on by default, only SSH.
2) I was trying to run a Node.js server with the port mapped 80 -> 3000, and ran the following commands to fix that:
iptables -F
iptables -I INPUT -p tcp --dport 80 -j ACCEPT
sudo service iptables-persistent flush
Amazon support answered it and it worked instantly:
I replicated the issue on my end on a test Ubuntu instance and was able to solve it. The issue was that in order to run Tomcat on a port below 1024 in Ubuntu/Unix, the service needs root privileges which is generally not recommended as running a process on port 80 with root privileges is an unnecessary security risk.
What we recommend is to use port redirection via iptables:
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 -j REDIRECT --to-port 8080
I hope the above information helps.
