Application deployed on Kubernetes in AWS EC2 instances is not accessible without the nginx external IP port number - amazon-ec2

I have deployed microservice-based applications in a Kubernetes setup on EC2 instances.
My web application is accessible if I append the port number to the external IP of ingress-nginx in the URL, but I want it to be accessible without the port number.
The same deployment works without a port number in my on-prem setup.
All ports are open in the AWS security group settings.

Related

Run Two Laravel applications with docker on same server pointing to subdomain

How do I run two Laravel Docker apps on the same server using one container per app and point to two domains?
Both apps are on the same AWS EC2 server,
eg:
container one points to -> one.mydomain.com
container two points to -> two.mydomain.com
I'm new to this.
Is it even possible?
An Apache solution would be preferable.
Yes, it is possible, and there are different ways to do it; I would suggest using AWS services. The two main options:
An AWS load balancer with host-based routing, publishing a different host port for each app
Nginx
With the AWS approach, you need to run your containers using ECS:
Create a load balancer
Create a cluster
Create a service
Attach the service to the load balancer and update the load balancer routing to host-based routing: app1.example.com will route to app1
Repeat the steps above for app2
The above is the standard way to deal with containers on AWS.
You can read more about this in gentle-introduction-to-how-aws-ecs-works-with-example-tutorial and Run containerized applications in production.
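The host-based routing step above can be sketched with the AWS CLI, assuming an Application Load Balancer and target groups already exist; the ARNs below are hypothetical placeholders:

```shell
# Sketch only: forward requests whose Host header is app1.example.com
# to app1's target group. Both ARNs are hypothetical placeholders.
aws elbv2 create-rule \
  --listener-arn arn:aws:elasticloadbalancing:us-east-1:123456789012:listener/app/my-alb/1234/5678 \
  --priority 10 \
  --conditions Field=host-header,Values=app1.example.com \
  --actions Type=forward,TargetGroupArn=arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/app1/abcd
```

A second rule with a different priority and Host value would route app2.example.com the same way.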
With Nginx, you need to manage everything yourself:
Run both containers on EC2
Install Nginx
Update the Nginx configuration to route traffic based on the hostname
Update the DNS entries to point to the EC2 instance's public IP. Both hostnames, for example app1.example.com and app2.example.com, will point to the same EC2 instance, but Nginx will decide which app serves each request.
server {
    server_name app1.example.com;

    location / {
        proxy_pass http://127.0.0.1:HOSTPORT;
    }
}

server {
    server_name app2.example.com;

    location / {
        proxy_pass http://127.0.0.1:HOSTPORT;
    }
}
I would recommend these two approaches, and Nginx over Apache, but if you are interested you can check apache-vhosts.

How to terminate HTTPS traffic directly on Kubernetes container

I have so far configured servers inside Kubernetes containers that used HTTP or terminated HTTPS at the ingress controller. Is it possible to terminate HTTPS (or more generally TLS) traffic from outside the cluster directly on the container, and how would the configuration look in that case?
This is for an on-premises Kubernetes cluster that was set up with kubeadm (with Flannel as the CNI plugin). If the Kubernetes Service were configured with externalIPs 1.2.3.4 (where my-service.my-domain resolves to 1.2.3.4) for service access from outside the cluster at https://my-service.my-domain, say, how could the web service running inside the container bind to address 1.2.3.4, and how could the client verify a server certificate for 1.2.3.4 when the container's IP address is (as far as I know) some local IP address instead? I currently don't see how this could be accomplished.
UPDATE: My current understanding is that when using an Ingress, HTTPS traffic would be terminated at the ingress controller (i.e. at the "edge" of the cluster) and further communication inside the cluster towards the backing container would be unencrypted. What I want is encrypted communication all the way to the container (both outside and inside the cluster).
I guess Istio's Envoy proxies are what you need; their main purpose is to authenticate, authorize, and encrypt service-to-service communication.
So, you need a mesh with mTLS authentication, also known as service-to-service authentication.
Visually, Service A is your ingress service and Service B is the service for the HTTP container.
So, you terminate external TLS traffic on the ingress controller, and it travels further inside the cluster with Istio mTLS encryption.
It's not exactly what you asked for -
terminate HTTPS traffic directly on Kubernetes container
Though it fulfills the requirement:
What I want is encrypted communication all the way to the container
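As a hedged sketch of the Istio side: strict mesh-wide mTLS can be enabled with a PeerAuthentication resource (assuming Istio 1.5 or later; applying it in the root namespace makes it mesh-wide):

```yaml
# Sketch: require mTLS for all workloads in the mesh (Istio 1.5+).
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: istio-system   # the root namespace makes the policy mesh-wide
spec:
  mtls:
    mode: STRICT
```

With STRICT mode, the sidecar proxies reject any plaintext traffic between workloads, so the hop from the ingress controller to the backing container is encrypted.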

How to set up a pm2 Node.js app on a Windows server?

I have this configuration:
an AWS EC2 Windows server with Node.js and pm2 installed on it
a Route 53 domain
an Elastic IP for the server
the app running on the server on port 80 using pm2
my app uses the Express web framework
My question is how to connect the domain name (on Route 53) to the pm2 app on the server.
Thanks.
Create one record set in Route 53 pointing to the EIP of your server and use that.
If you have multiple servers:
Create one ELB
Attach the instances to it
Add a CNAME record in Route 53 pointing to the ELB's DNS name
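The single-server case above can be sketched with the AWS CLI; the hosted zone ID, domain name, and IP below are hypothetical placeholders:

```shell
# Sketch: create (or overwrite) an A record pointing the domain at the
# server's Elastic IP. Zone ID, name, and IP are hypothetical.
aws route53 change-resource-record-sets \
  --hosted-zone-id Z1EXAMPLE \
  --change-batch '{
    "Changes": [{
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "app.example.com",
        "Type": "A",
        "TTL": 300,
        "ResourceRecords": [{"Value": "203.0.113.10"}]
      }
    }]
  }'
```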

Port open for memcache in aws ec2 instance

Hi, I'm running an AWS EC2 instance with Drupal 6.
I plan on installing memcached on this server. One requirement is to open up port 11211, which is the default port for memcached.
I want to know how to open incoming and outgoing traffic for port 11211 on an AWS EC2 instance. Do I need to open this port for both incoming and outgoing traffic?
Secondly, how do I secure the AWS setup so that only my EC2 instance can access port 11211?
Thanks!
Is your EC2 instance within a VPC, or is it EC2-Classic?
You need to open the ports in your security groups and network ACLs.
If you are new to AWS, you should first understand NACLs and security groups and then set up security in your environment.
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/authorizing-access-to-an-instance.html
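To restrict memcached to your own instances, a security group rule can reference a security group (rather than 0.0.0.0/0) as its source; a sketch with the AWS CLI, where the group ID is a hypothetical placeholder:

```shell
# Sketch: allow TCP 11211 only from instances that belong to the same
# security group. sg-0123456789abcdef0 is a hypothetical placeholder.
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp \
  --port 11211 \
  --source-group sg-0123456789abcdef0
```

With a source group instead of a CIDR, only traffic from members of that group reaches port 11211, regardless of their public IPs.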

Google Cloud Network Load Balancer - Health checks always unhealthy

I tried to set up a network load balancer on Google Cloud, but the health check always returns unhealthy.
These are the steps I followed:
I created two Windows Server 2012 R2 instances
I checked that port 80 is open to the public on both instances
I created the forwarding rules, and Google Cloud gave me an external IP
I set up the external IP on a network loopback interface on both server instances
I created a network route that forwards the traffic to both instances (Routes menu)
I created another network route for 169.254.169.254/32 (the source of network load balancer traffic) pointing to both Windows server instances
I created the same site (example.com) in IIS 8 on both server instances, and the site is running correctly
The DNS settings of the domain example.com point to the external Google Cloud IP that I am using for the network load balancer
I configured the health check:
Path: /
Protocol: HTTP
Host: example.com
Session affinity: Client IP
I created a target pool and added both server instances and the health check
I assigned the target pool to the forwarding rule
When I select the target pool option, both instances are marked as unhealthy for the external IP that Google Cloud gave me, and I don't know why this happens.
I see the web page switching between the server instances randomly all the time.
Your help is appreciated, thank you!
You don't need to add any GCE network routes.
The GCE agent takes care of adding the load balancer IP to the VM's network configuration, so there is no need to do it manually: https://github.com/GoogleCloudPlatform/compute-image-windows
IIS must respond to requests on the LB IP:
Check the IIS bindings from IIS Manager, then reset IIS.
Confirm with netstat that IIS is listening on 0.0.0.0 or on the load-balanced IP.
Access the LB IP from one of the servers; it should work.
The GCE firewall must allow traffic from the clients' IPs and also from the metadata server (169.254.169.254). The metadata server is used for health checks.
Network Load Balancing tutorial: https://cloud.google.com/compute/docs/load-balancing/network/example
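The firewall point above can be sketched with gcloud; the rule name is a hypothetical placeholder, and the source ranges assume public HTTP clients plus the legacy network LB health-check source (169.254.169.254):

```shell
# Sketch: allow HTTP from clients and from the metadata server used for
# legacy network LB health checks. Rule name is a hypothetical placeholder.
gcloud compute firewall-rules create allow-http-and-healthcheck \
  --allow tcp:80 \
  --source-ranges 0.0.0.0/0,169.254.169.254/32
```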
