I recently set up an EC2 instance (RHEL 7.2) on Amazon Web Services, and I am having trouble getting it to respond to HTTP requests. I run docker run -p 12345:80 nginx:latest successfully, and I have added port 12345 to the inbound rules of the instance's security group. But this hasn't worked so far; I've tried both adding the rule to the default security group and creating a new security group. Locally, curl http://localhost:12345/ works fine.
The strange thing is that if I change -p 12345:80 to -p 80:80, it works! I have tried other port mappings, but only port 80 works.
Has anyone else had a similar experience, or can you offer some help and advice? Thanks in advance!
PS:
My containers:
Image nginx 0.0.0.0:12345->80/tcp
Image nginx 0.0.0.0:80->80/tcp
Security group:
All TCP TCP 1-65535 0.0.0.0/0
EC2:
Public IPv4: xx.xx.zz.zz
I can access http://xx.xx.zz.zz/ in the browser and can SSH in via port 22, but http://xx.xx.zz.zz:12345/ fails.
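For clarity, the failing setup above boils down to these commands (the public IP is masked as in the post):
docker run -d -p 12345:80 nginx:latest    # map host port 12345 to the container's port 80
curl http://localhost:12345/              # works from inside the instance
curl http://xx.xx.zz.zz:12345/            # fails from outside; only -p 80:80 is reachable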
I would like to run my Next.js app on an EC2 instance (Ubuntu 18.04).
However, when I run sudo npm start, it runs on localhost, and I don't know how to make it accessible from outside. I tried to access the IP given by AWS and to add the port (:3000), without success.
Do you have an idea of how I should set up this app to make it accessible? What did I miss?
Finally found out:
Localhost was not the problem; the ports are simply not open by default on EC2 instances.
By adding a security group and opening the ports I needed (80 for HTTP, 443 for HTTPS, 22 for SSH, and 3000 for the test) it worked.
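For reference, the same inbound rules can be added from the AWS CLI instead of the console; a minimal sketch, assuming a placeholder security group ID:
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 3000 --cidr 0.0.0.0/0
Repeat with --port 22, 80, and 443 for the other rules (and consider tightening --cidr for SSH).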
I have created a Google Compute Engine instance with the Container-Optimized OS image.
I have configured the firewall to allow HTTP and HTTPS.
I am using a Docker image with a Spring Boot application which connects to Cloud SQL. When I use the run command in the Compute Engine instance's SSH session, i.e. docker run --rm <name>, the Spring Boot app starts successfully.
When I try to access the web services through the Compute Engine instance's external IP, it does not work.
I went through a different question and found that I should first try the sudo wget http://localhost command on the instance CLI, and that if that works, everything should be good. But I am getting a connection refused message on 127.0.0.1:80.
I also tried the command to open the port on Container-Optimized OS, i.e.
sudo iptables -w -A INPUT -p tcp --dport 80 -j ACCEPT
but nothing is working.
The default port for Spring Boot is 8080, not 80.
Run this command inside the instance container to see what ports are in LISTENING state:
sudo netstat -tulpn | grep LISTEN
You can redirect port 80 to port 8080 with this command:
sudo iptables -t nat -I PREROUTING -p tcp --dport 80 -j REDIRECT --to-port 8080
Note: This iptables command only redirects port 80 to 8080 on external network interfaces. It has no effect for localhost or 127.0.0.1.
For Google Compute Engine instances you do not need to enable ports using iptables. This is done via Google VPC firewall rules. You can use both but make sure you understand exactly what you are configuring and the side effects.
Note: Your Spring Boot application needs to listen on 0.0.0.0, not 127.0.0.1 or localhost. The last two are internal-only addresses; 0.0.0.0 means listen on all network interfaces.
Note: Do not use sudo in front of wget. This is not necessary.
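To make that binding explicit, the relevant application.properties entries would look like this (a sketch; server.address and server.port are standard Spring Boot properties, and 0.0.0.0 binds all interfaces):
server.address=0.0.0.0
server.port=8080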
First, confirm which port your Spring Boot application uses: 8080 or 80. This depends on what you have configured in the application.properties file. This port is referred to as ContainerPort in the steps below.
Execute docker run <image-name>:<tag>. This will run the image and show the container logs on the console. If there is something wrong with your Spring Boot app, the logs will show it and the container will shut down. Press Ctrl+C to stop the container and return to the shell.
If there is no error in step 1, run docker run -d -p <HostPort>:<ContainerPort> <image-name>:<tag>. Here HostPort is any free port on your GCP host VM and ContainerPort is the port used by your Spring Boot application within the container. The -d option starts your container in detached mode.
Run docker ps and make sure that the container started in step 2 is running. It may not run if there is an error - for example if the HostPort you specified is already in use.
If step 3 shows that the container is running, execute curl http://localhost:<HostPort>/<End-Point-Path>. Here End-Point-Path is a valid path to a working endpoint within the container. If the endpoint is correct you should see the expected result from the Spring Boot app in the console.
Navigate to Google Cloud Console -> VPC network -> Firewall rules and add a firewall rule to open HostPort on your GCP VM.
Access your endpoint via the VM's external IP with URL - http://<VM-External-IP>:<HostPort>/<End-Point-Path>
Unless there is an application issue with your Spring Boot app, these steps should get you going.
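Put together, the sequence might look like this, with a hypothetical image name, port, and endpoint path standing in for yours:
docker run my-spring-app:latest                    # step 1: foreground run, watch the logs for errors
docker run -d -p 8080:8080 my-spring-app:latest    # step 2: detached, HostPort 8080 -> ContainerPort 8080
docker ps                                          # step 3: confirm the container is still running
curl http://localhost:8080/health                  # step 4: hit a known endpoint from inside the VM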
I was able to build the correct solution with your help (John Hanley and Cyac).
I am combining both solutions here to help the next person facing this.
As John said, by default Spring Boot uses port 8080, not 80, and as Cyac specified, you need to set the port to 80 explicitly in the application.properties file using:
server.port=80
Make sure you expose port 80 in the Docker image (see the Dockerfile sketch after these steps).
On GCP Container-Optimized OS, make sure you have allowed HTTP and HTTPS traffic.
Run the command:
sudo iptables -w -A INPUT -p tcp --dport 80 -j ACCEPT
Run Docker using:
docker run -p 80:80 SPRING_IMAGE
where SPRING_IMAGE is the name of the Docker image with the Spring Boot build.
Test using curl http://localhost/ENDPOINT_NAME, e.g. http://localhost/shops/all
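As noted in the steps above, the image must expose port 80. A minimal Dockerfile sketch (the base image and jar path are assumptions for illustration):
FROM openjdk:11-jre-slim                  # assumed base image
COPY target/app.jar /app.jar              # assumed build artifact path
EXPOSE 80                                 # the port the Spring Boot app listens on
ENTRYPOINT ["java", "-jar", "/app.jar"]
Note that EXPOSE is mostly documentation; the actual host mapping still comes from the -p 80:80 flag on docker run.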
I tried to run this hello-world app on an AWS EC2 instance with docker-compose up --build. It works as expected and is accessible remotely from the EC2 public IP when I use port 80, i.e. "80:80", as shown in the docker-compose file.
However, if I change to another port such as "5106:80", it is not accessible from a remote host using <public IPv4 address>:5106, even though it is available locally if I SSH into the EC2 instance and try localhost:5106. Please note:
I've ensured the EC2 instance is in a public subnet, and I have configured the security group to make the port (in this case, 5106) accept inbound traffic from my laptop.
I know it's not a problem with the hello-world app, because I experience exactly the same problem with another app: only port 80 works with docker-compose port mapping on EC2.
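For reference, the port mapping in question would look like this in the docker-compose.yml (the service name is a placeholder):
services:
  web:
    build: .
    ports:
      - "5106:80"    # host port 5106 -> container port 80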
As it works with port 80 and doesn't work with port 5106, it could be one of two possibilities:
There is an issue with your security groups. Check that you have added port 5106 to the inbound rules of your security group.
There is a firewall or antivirus that doesn't allow connections to web pages on ports other than 80 or 443. You can try another device or another network to see if the problem persists.
In this case, it turned out to be the latter.
It is possible that the Docker networks need to be deleted:
docker network rm $(docker network ls -q)
Then run docker-compose up again.
I have a server (AWS) to which I have SSH access.
There is a service (supervisor) running on this server on port 9001, whose web view could be accessed through 127.0.0.1:9001 if it were a local machine.
But since it is not a local machine, how do I access it?
I got the IP address of the machine using ifconfig | grep inet and then tried accessing it through https://172.11.11.1:9001/
But it didn't work.
When I try wget https://172.11.11.1:9001/ it shows
Connecting to 172.31.19.8:9001... and hangs there.
I have added the following lines to my supervisor conf file:
[inet_http_server]
port = *:9001
Can someone please help me with this?
This is more of a server configuration question. You'll most likely find that your AWS access rules allow connections on ports 22, 80, and 443 only. In the AWS console you'll need to add a rule to the instance's security group to allow access to port 9001.
I'm trying to set up two instances under an elastic load balancer, but cannot figure out how I'm supposed to access the instances through the load balancer.
I've set up the instances with a security group that allows access from anywhere to certain ports. I can access the instances directly using their "Public DNS" host name and the port PORT:
http://[publicdns]:PORT/
The load balancer contains the two instances, they are both "In Service", and it forwards the port (PORT) onto the same port on the instances.
However, if I request
http://[dnsname]:PORT (where dnsname is the A record listed for the ELB)
it doesn't connect to the instance (the connection times out).
Is this not the correct way to use the load balancer, or do I need to do anything else to allow access to the load balancer? The only mention of security groups in relation to the load balancer is about restricting access to the instances to the load balancer only, but I don't want that; I want to be able to access them individually as well.
I'm sure there's something simple and silly that I've forgotten, not realised or done wrong :P
Cheers,
Svend.
Extra info added:
The Port Configuration for the Load Balancer looks like this (actually 3 ports):
10060 (HTTP) forwarding to 10060 (HTTP), stickiness disabled
10061 (HTTP) forwarding to 10061 (HTTP), stickiness disabled
10062 (HTTP) forwarding to 10062 (HTTP), stickiness disabled
And it's using the standard/default elb security group (amazon-elb-sg).
The instances have two security groups. One external looking like this:
22 (SSH) 0.0.0.0/0
10060 - 10061 0.0.0.0/0
10062 0.0.0.0/0
and one internal, allowing anything within the internal group to communicate on all ports:
0 - 65535 sg-xxxxxxxx (security group ID)
Not sure it makes any difference, but the instances are m1.small instances of image ami-31814f58.
Something that might have relevance:
My health check used to be HTTP:PORT/ but the load balancer kept saying that the instances were "Out of Service", even though I seemed to get a 200 response for requests on that port.
I then changed it to TCP:PORT and it then said they were "In Service".
Is there something very specific that should be returned for the HTTP check, or is a plain HTTP 200 response all that's required? And does the fact that it wasn't working hint at why the load balancing itself wasn't working either?
It sounds like you have everything set up correctly. Are the same ports going into the load balancer as into the instances, or are you forwarding the requests to different ports?
As a side note, when I configure my load balancers I generally don't open my instances on any port to the general public; I only allow the load balancer to make requests to those instances. I've noticed in the past that many people make malicious requests to an instance's IP trying to find a security breach. I've even seen people trying to brute-force logins into my Windows machines...
To create a security rule only for the load balancers, run the following commands and remove any other rules you have in the security group for the port the load balancer is using. If you're not using the command line to run these commands, just let me know which interface you're trying to use and I can come up with a sample that will work for you.
elb-create-lb-listeners <load-balancer> --listener "protocol=http, lb-port=<port>, instance-port=<port>"
ec2-authorize <security-group> -o amazon-elb-sg -u amazon-elb
Back to your question. As I said, the steps you described are correct: opening the port on the instance and forwarding the port to the instance should be enough. Maybe you could post the full configuration of your instance's security groups and the load balancer so that I can see if there is something else affecting your situation.
I went ahead and created a script that reproduces the exact steps I'm using. This assumes you're using Linux as the operating system and that the AWS CLI tools are already installed. If you don't have this set up already, I recommend starting a new Amazon Linux micro instance and running the script from there, since those have everything installed.
Download the X.509 certificate files from Amazon: https://aws-portal.amazon.com/gp/aws/securityCredentials
Copy the certificate files to the machine where you will run the commands
Set the two variables that are required by the script:
aws_account=<aws account id>
keypair="<key pair name>"
Export the certificates as environment variables:
export EC2_PRIVATE_KEY=<private_Key_file>
export EC2_CERT=<cert_file>
export EC2_URL=https://ec2.us-east-1.amazonaws.com
Create the security groups:
ec2-create-group loadbalancer-sg -d "Loadbalancer Test group"
ec2-authorize loadbalancer-sg -o loadbalancer-sg -u $aws_account
ec2-authorize loadbalancer-sg -p 80 -s 0.0.0.0/0
Create the user-data file for the instance so that Apache is started and the index.html file is created:
mkdir -p ~/temp/
echo '#! /bin/sh
yum -qy install httpd
touch /var/www/html/index.html
/etc/init.d/httpd start' > ~/temp/user-data.sh
Start the new instance and save the instance ID:
instanceid=`ec2-run-instances ami-31814f58 -k "$keypair" -t t1.micro -g loadbalancer-sg -g default -z us-east-1a -f ~/temp/user-data.sh | grep INSTANCE | awk '{ print $2 }'`
Create the load balancer and attach the instance:
elb-create-lb test-lb --availability-zones us-east-1a --listener "protocol=http, lb-port=80, instance-port=80"
elb-register-instances-with-lb test-lb --instances $instanceid
Wait until your instance's state in the load balancer is "InService", then try to access the URLs.
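To watch for that state from the same legacy ELB API tools used above, a sketch:
elb-describe-instance-health test-lb    # should eventually report the instance as InService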