How to send requests from a service running on an EKS cluster (as a client) to a different service running outside the network (client-server)

Scenario: I have two services, let's say A and B.
At present, service A runs on EC2 instances behind an ELB and makes calls to service B. On the service B side, we have whitelisted service A's IP so that requests are accepted only from whitelisted IPs.
We have now migrated service A from EC2 instances to EKS. I am a bit new to EKS, so I would like to know how we can allow service A to send requests to service B.

You can set up a NAT gateway with an Elastic IP address and route your cluster's egress through it. You can then whitelist the NAT gateway's public EIP, which doesn't change.
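For example, a rough sketch with the AWS CLI, assuming the worker nodes live in private subnets (subnet-pub1, rtb-priv1, and the IDs shown are hypothetical placeholders):

$ aws ec2 allocate-address --domain vpc                    # note the returned AllocationId
$ aws ec2 create-nat-gateway \
    --subnet-id subnet-pub1 \
    --allocation-id eipalloc-0123456789abcdef0             # the EIP allocated above
$ aws ec2 create-route \
    --route-table-id rtb-priv1 \
    --destination-cidr-block 0.0.0.0/0 \
    --nat-gateway-id nat-0123456789abcdef0                 # route node egress via the NAT

Service B then only needs to whitelist that single EIP; since all pod egress is SNATed through the NAT gateway, the whitelist survives node and pod churn.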

Related

Service-to-service communication in Consul

I am developing several services and use Consul as the service registry. I'm able to register all of my services with Consul.
Now, for the next step, I need to be able to communicate from service A to service B.
Without a service registry, I would simply dispatch an HTTP client request from service A to service B.
But since I now have service discovery in place, should I get service B's host address via Consul and then dispatch an HTTP client request to that address? Or does Consul also provide an API gateway, so that I only need to dispatch my HTTP client request from service A to Consul, and Consul will automatically forward it to the destination?
Also, if there is relevant documentation about my case, I would be very glad to take a look at it. (I can't find it; probably my Google search keywords are wrong.)
Consul supports two methods for service discovery: DNS and HTTP.
Applications can perform DNS lookups against their local Consul agent, which exposes a DNS server on port 8600 (you can also configure DNS forwarding). For example, an application can issue an A record query for web.service.consul and Consul will return a list of healthy instance endpoints for the web service. SRV lookups are also supported in order to retrieve both the IP and port for a given service. The DNS interface also supports querying endpoints by service tag and data center. Details can be found at Consul.io: DNS - Service Lookups.
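For example, a quick lookup against a local agent (web is a placeholder service name):

$ dig @127.0.0.1 -p 8600 web.service.consul A      # healthy instance IPs
$ dig @127.0.0.1 -p 8600 web.service.consul SRV    # IPs plus ports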
HTTP-based service discovery can be performed by querying the /v1/health/service/:name endpoint against the local agent. The following will return a full list of healthy and unhealthy endpoints for the service nginx.
$ curl http://127.0.0.1:8500/v1/health/service/nginx
You can use the passing query parameter to restrict the output to only healthy services.
$ curl "http://127.0.0.1:8500/v1/health/service/nginx?passing"
I recommend reviewing the guide Register a Service with Consul Service Discovery for more info on registering and querying services from the catalog.
Lastly, API gateways like Traefik and Solo's Gloo support using Consul for service discovery (see Traefik's Consul Catalog Provider and Gloo's Consul Services). You could configure your services to route requests to these gateways, and allow the gateway to forward to the backend destination.
I ended up getting the list of service info from Consul, then performing name matching on it to get the service address.
I use this endpoint to get the list of services and their data:
http://localhost:8500/v1/agent/services
So it's client-side discovery, I guess.
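A rough sketch of that client-side lookup in shell, assuming jq is available and service-b is the registered name of the target service (both assumptions):

$ curl -s http://localhost:8500/v1/agent/services \
    | jq -r '.[] | select(.Service == "service-b") | "\(.Address):\(.Port)"'

Note that Address can be empty when the service registered without one; in that case, fall back to the agent's own address.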

How to terminate HTTPS traffic directly on Kubernetes container

I have so far configured servers inside Kubernetes containers that used HTTP or terminated HTTPS at the ingress controller. Is it possible to terminate HTTPS (or, more generally, TLS) traffic from outside the cluster directly on the container, and what would the configuration look like in that case?
This is for an on-premises Kubernetes cluster set up with kubeadm (with Flannel as the CNI plugin). Say the Kubernetes Service is configured with externalIPs 1.2.3.4 (where my-service.my-domain resolves to 1.2.3.4) for access from outside the cluster at https://my-service.my-domain. How could the web service running inside the container bind to address 1.2.3.4, and how could the client verify a server certificate for 1.2.3.4 when the container's IP address is (as far as I know) some local IP address instead? I currently don't see how this could be accomplished.
UPDATE: My current understanding is that when using an Ingress, HTTPS traffic is terminated at the ingress controller (i.e., at the "edge" of the cluster), and further communication inside the cluster towards the backing container is unencrypted. What I want is encrypted communication all the way to the container (both outside and inside the cluster).
I guess Istio's Envoy sidecar proxies are what you need; their main purpose is to authenticate, authorize, and encrypt service-to-service communication.
So you need a mesh with mTLS authentication, also known as service-to-service authentication.
Visually, service A is your ingress service and service B is the Service in front of the HTTP container.
So you terminate the external TLS traffic on the ingress controller, and it travels further inside the cluster with Istio mTLS encryption.
It's not exactly what you asked for -
terminate HTTPS traffic directly on Kubernetes container
- though it fulfills the requirement:
What I want is encrypted communication all the way to the container
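A minimal sketch of turning on mesh-wide mTLS, assuming Istio is already installed and sidecars are injected into the workload namespaces (a PeerAuthentication named default in the root namespace applies mesh-wide):

$ kubectl apply -f - <<EOF
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: istio-system    # root namespace, so the policy is mesh-wide
spec:
  mtls:
    mode: STRICT             # sidecars reject plain-text traffic
EOF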

Curl into an API cluster secured by OpenVPN from another cluster's pod's container

I have created 2 Kubernetes clusters on AWS within a VPC:
1) a cluster dedicated to microservices (MI)
2) a cluster dedicated to Consul/Vault (Vault)
Both clusters can be reached through distinct classic public load balancers which expose the k8s APIs:
MI: https://api.k8s.domain.com
Vault: https://api.vault.domain.com
I also set up OpenVPN on both clusters, so you need to be logged in to the VPN to curl or kubectl into the clusters.
To do that, I just added a new rule to the ELBs' security groups with the VPN's IP on port 443:
HTTPS 443 VPN's IP/32
At this point everything works correctly, which means I'm able to successfully kubectl into both clusters.
The next thing I need to do is a curl from a pod's container within the Vault cluster into the MI cluster. Basically:
Vault cluster --------> curl https://api.k8s.domain.com --header "Authorization: Bearer $TOKEN" --------> MI cluster
The problem is that at the moment the clusters only allow traffic from the VPN's IP.
To solve that, I added new rules to the security group of the MI cluster's load balancer.
Those new rules allow traffic from the private IPs of each of the Vault cluster's node and master instances.
But for some reason it does not work!
Please note that before adding restrictions to the ELB's security group, I made sure the communication works with both clusters allowing all traffic (0.0.0.0/0).
So the question is: when I execute a curl command from a pod's container into another cluster's API within the same VPC, what is the IP of the container that I should add to the security group?
The NAT gateway's EIP for the Vault VPC had to be added to the ELB's security group to allow the traffic.
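A hedged sketch of finding that EIP and opening the rule with the AWS CLI (the VPC and security group IDs below are hypothetical placeholders):

$ aws ec2 describe-nat-gateways \
    --filter Name=vpc-id,Values=vpc-0123456789abcdef0 \
    --query 'NatGateways[].NatGatewayAddresses[].PublicIp'
$ aws ec2 authorize-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --protocol tcp --port 443 \
    --cidr 203.0.113.10/32    # the NAT gateway's EIP returned above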

EC2 CLI API not usable within a VPC?

I have some instances within an EC2 VPC (using only IP addresses from RFC 1918) that need to use some EC2 services via the CLI interface (ec2-describe-instances, ec2-run-instances, etc.).
I can't get it to work. My understanding is that the service endpoint of the CLI interface is located somewhere in the AWS cloud, and my requests originating from an RFC 1918 address are not routable in the AWS cloud between the EC2 service endpoint and my instance.
Is that correct?
Is my only solution to install a NAT instance within my VPC (I would like to avoid it)? Or is there a way to remap this EC2 service endpoint within my VPC to an RFC 1918 address?
Any help welcome!
Thanks in advance,
didier
You can give the instance an Elastic IP address and get outbound access to other public IPs, like the EC2 API endpoint. Make sure your security group doesn't allow any inbound traffic from the Internet.
Alternatively, if you don't want to use an EIP, you can launch an instance in a VPC with a public IP address. More here: http://aws.typepad.com/aws/2013/08/additional-ip-address-flexibility-in-the-virtual-private-cloud.html
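A minimal sketch of the EIP route with the modern AWS CLI (the instance ID is a hypothetical placeholder):

$ aws ec2 allocate-address --domain vpc             # note the returned AllocationId
$ aws ec2 associate-address \
    --instance-id i-0123456789abcdef0 \
    --allocation-id eipalloc-0123456789abcdef0      # AllocationId from the previous call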

Why am I unable to associate an Elastic IP to an EC2 instance in a second VPC on AWS?

I have had a VPC (with 1 subnet) on Amazon Web Services (AWS) for a long time, with several instances, each having an Elastic IP address.
For new needs, I have defined a second VPC (with 1 subnet also) on the same account. For some reason, I can't associate an EIP (which is allocated with no problem) with instances launched in VPC #2: the interactive wizard of the console only presents me the instances of the first VPC.
Is this a known limitation, or am I doing something wrong?
Two questions:
How many EIPs do you have on your account?
Is the 2nd VPC using a NAT instance to access the Internet?
EIP addresses should only be used on instances in subnets configured to route their traffic directly to the Internet Gateway. EIPs cannot be used on instances in subnets configured to use a NAT instance to access the Internet. (aws.amazon.com)
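One way to check which case you're in, sketched with the AWS CLI (the subnet ID is a hypothetical placeholder): if the subnet's default route points at an igw-..., EIPs will work; if it points at a NAT instance or NAT gateway, they won't.

$ aws ec2 describe-route-tables \
    --filters Name=association.subnet-id,Values=subnet-0123456789abcdef0 \
    --query 'RouteTables[].Routes[?DestinationCidrBlock==`0.0.0.0/0`]'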
