How to create a load balancer service on OpenShift with HTTPS (IBM Cloud)

I have an OpenShift cluster (on IBM Cloud), and in order to secure my DNS record I followed these steps:
1- Secure the OpenShift route with HTTPS
2- Declare port 443 on the LoadBalancer service
3- Declare my LoadBalancer as a CNAME on my DNS record (Azure, as external DNS records)
But after all those steps my LoadBalancer still doesn't listen on the HTTPS port.
So how do I create a load balancer service on OpenShift with HTTPS?
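A minimal LoadBalancer Service manifest for step 2 might look like the following sketch (the service name, app label, and target port are placeholders, not from the question). Note that the Service only forwards TCP on port 443; TLS itself must still be terminated by the route or the pod behind it:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-https-lb        # placeholder name
spec:
  type: LoadBalancer
  selector:
    app: my-app            # placeholder; must match the pod labels
  ports:
    - name: https
      port: 443            # port the load balancer listens on
      targetPort: 8443     # placeholder: the container's TLS port
      protocol: TCP
```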

Related

Get DOMAIN for TCP Kubernetes Server behind NGINX Ingress

I have a server that uses a custom TCP protocol. It is accessed through a subdomain such as sub1.mydomain.com and the server needs to know what subdomain is being used to access it.
Given the following infrastructure, how could you determine the domain and subdomain name from inside the running Kubernetes Pod?
[AWS Classic Load Balancer] ->
[Kubernetes Nginx Ingress Controller] ->
[Kubernetes Service] ->
[Kubernetes Deployment]

How to limit access to ElasticBeanstalk port 80 from internal zone only?

I have ElasticBeanstalk environment which should be exposed to the Internet via HTTPS port but also exposed via HTTP only to some instances inside my cloud. It therefore has 2 listeners. EB auto-sets a "HTTP ANY IP" inbound rule for the LoadBalancer security group of my env.
Now, I have defined a Route 53 alias to my EB environment, e.g. "myenv.company.internal". Next, I curl "http://env1.company.internal" from some EC2 instance and it works only if the inbound rules are "HTTP ANY IP". If I try to limit HTTP only to the security group of my EC2 instance, that instance cannot curl.
How do I limit HTTP port 80 access of my EB environment only to some other security group in my cloud?
You can't do this for an internet-facing ALB. If you set up an env1.company.internal private hosted zone record for a public ALB, it will just resolve to the public IP addresses of the ALB.
Therefore, you can't use SGs in the ALB's SG ingress rules to limit traffic. That's why it works with "HTTP ANY IP" but not with referenced SGs.
To work around this, you can attach an Elastic IP to your other instance and limit port 80 on the ALB to only allow connections from that Elastic IP address. For multiple instances, you can use a NAT gateway's public IP address.
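As a sketch, the Elastic IP workaround above could be expressed as a CloudFormation ingress rule on the ALB's security group (the security group ID and the IP address are placeholders):

```yaml
# Hypothetical CloudFormation fragment: allow HTTP to the ALB
# only from one known Elastic IP (or a NAT gateway's public IP).
AlbHttpFromElasticIp:
  Type: AWS::EC2::SecurityGroupIngress
  Properties:
    GroupId: sg-0123456789abcdef0   # placeholder: the ALB's security group
    IpProtocol: tcp
    FromPort: 80
    ToPort: 80
    CidrIp: 203.0.113.10/32         # placeholder: the Elastic IP as a /32
```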

Google Cloud Load Balancer

I created a Java app and deployed it to a Google Cloud Compute Engine instance, then I created a Load Balancer, but when I try to access the Load Balancer frontend IP on port 443 it redirects to port 80.
You can create forwarding rules that reference an IP address and the port(s) on which the load balancer accepts traffic. See the documentation on forwarding rule concepts and IP address specification; to add a forwarding rule, follow the documented steps.
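A forwarding rule that accepts traffic on port 443 can be sketched as a Deployment Manager resource (the project, rule, and proxy names are placeholders; this assumes a target HTTPS proxy has already been created):

```yaml
# Hypothetical Deployment Manager fragment: a global forwarding rule
# sending port 443 traffic to an existing target HTTPS proxy.
resources:
  - name: https-forwarding-rule
    type: compute.v1.globalForwardingRule
    properties:
      IPProtocol: TCP
      portRange: "443"
      # placeholder: URL of a previously created target HTTPS proxy
      target: projects/my-project/global/targetHttpsProxies/my-https-proxy
```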

HTTPS Load Balancer to expose a Workload on Google Kubernetes

I created a custom HTTPS LoadBalancer (details) and I need my Kubernetes Workload to be exposed with this LoadBalancer. For now, if I send a request to this endpoint I get a 502 error.
When I choose the Expose option in the Workload Console page, there are only TCP and UDP service types available, and a TCP LoadBalancer is created automatically.
How do I expose a Kubernetes Workload with an existing LoadBalancer? Or maybe I don't even need to do it, and requests don't work because my instances are "unhealthy"? (healthcheck)
You need to create a Kubernetes Ingress.
First, expose the deployment from k8s; for HTTPS choose port 443, and the service type can be either LoadBalancer (external IP) or ClusterIP. (You can also test this by accessing the IP or by port-forwarding.)
Then create the Ingress.
In the Ingress YAML, when choosing the backend, set the port and serviceName that were configured when exposing the deployment.
For example:

```yaml
- path: /some-route
  backend:
    serviceName: your-service-name
    servicePort: 443
```
On GCP, when the Ingress is created, a load balancer is created for it; the backends and instance groups are built automatically too.
If you want to use the already created load balancer instead, select the backend services from the LB that the Ingress created and add them there.
The load balancer will only work if the health checks pass, so your service must expose a route that returns a 200 response over HTTPS.
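Putting the steps above together, a minimal Ingress manifest might look like this sketch (the Ingress name is a placeholder; it assumes a Service named your-service-name already exposes port 443, and uses the older API version that matches the serviceName/servicePort fields in the snippet):

```yaml
apiVersion: extensions/v1beta1   # older GKE API matching serviceName/servicePort
kind: Ingress
metadata:
  name: my-ingress               # placeholder name
spec:
  rules:
    - http:
        paths:
          - path: /some-route
            backend:
              serviceName: your-service-name   # placeholder: the exposed Service
              servicePort: 443
```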

Google container engine health checks from service load balancer

I have a network (Service) load balancer on GCP and a Kubernetes cluster with a pod that serves traffic on ports 80 and 443. How can I create a health check that succeeds, so the load balancer marks the nodes healthy? I tried creating health checks for ports 80 and 443, and also opened port 10250 for the kubelet and tried an HTTP health check on 10250, as suggested in:
Is the Google Container Engine Kubernetes Service LoadBalancer sending traffic to unresponsive hosts?
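One relevant Kubernetes mechanism, sketched below under the assumption of a reasonably recent cluster (names are placeholders): with externalTrafficPolicy: Local, the Service allocates a dedicated healthCheckNodePort that the cloud load balancer's health checks can target, so only nodes actually running the pod are marked healthy:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-lb-service            # placeholder name
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local   # health checks then only pass on nodes hosting the pod
  selector:
    app: my-app                  # placeholder; must match the pod labels
  ports:
    - name: http
      port: 80
      targetPort: 80
    - name: https
      port: 443
      targetPort: 443
# Kubernetes allocates spec.healthCheckNodePort automatically; point the
# cloud health check at that node port over HTTP (kube-proxy serves /healthz).
```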
