Configuring Ingress with a GraphQL Gateway + gRPC Microservices - go

I am looking to deploy a SaaS platform I have built on Kubernetes, but I have hit a barrier when it comes to setting up the deployment correctly. You can find the setup I am going for
here. I have tried setting up the ingress controller, and it is, in fact, forwarding requests to the GraphQL gateway; the problem is that the GraphQL gateway itself is unable to connect to the other services due to gRPC connection errors. Below you will find my Deployment.yml, which contains the GraphQL gateway, an ingress controller, and one service called general.
What I am asking, basically, is: how can I connect my gRPC-based services to the main GraphQL gateway?
apiVersion: v1
kind: Service
metadata:
  name: graphql-gateway
spec:
  type: ClusterIP
  ports:
  - port: 80
    targetPort: 8080
  selector:
    app: gateway
---
apiVersion: v1
kind: Service
metadata:
  name: general-service
spec:
  ports:
  - protocol: TCP
    port: 443
    targetPort: 600
  selector:
    app: general
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: gateway
spec:
  replicas: 2
  selector:
    matchLabels:
      app: gateway
  template:
    metadata:
      labels:
        app: gateway
    spec:
      containers:
      - name: gateway
        image: myregistry/gateway:latest
        imagePullPolicy: "Always"
        ports:
        - containerPort: 8080
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: general
spec:
  replicas: 2
  selector:
    matchLabels:
      app: general
  template:
    metadata:
      labels:
        app: general
    spec:
      containers:
      - name: general
        image: myregistry/general:latest
        imagePullPolicy: "Always"
        ports:
        - containerPort: 600
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: gateway-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/grpc-backend: "true"
spec:
  rules:
  - host: "playground.mydomain.com"
    http:
      paths:
      - pathType: Prefix
        path: "/"
        backend:
          service:
            name: graphql-gateway
            port:
              number: 80
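For what it's worth, the gateway normally reaches the gRPC services directly through their cluster DNS names, not through the ingress. Below is a minimal sketch of how the general service's address could be handed to the gateway container; the GENERAL_SERVICE_ADDR variable name is hypothetical, and the DNS name assumes the default namespace. Two things worth checking against the manifest above: the grpc-backend annotation makes NGINX speak gRPC to the gateway itself, which is wrong if the gateway serves GraphQL over plain HTTP on 8080, and serving plaintext gRPC on service port 443 can confuse clients that assume TLS on that port.
# Sketch: gateway Deployment excerpt; the gateway's gRPC client would
# dial the address given in this (hypothetical) environment variable.
      containers:
      - name: gateway
        image: myregistry/gateway:latest
        env:
        - name: GENERAL_SERVICE_ADDR # hypothetical name
          # <service>.<namespace>.svc.cluster.local:<service-port>
          value: "general-service.default.svc.cluster.local:443"
        ports:
        - containerPort: 8080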

Related

How to create the deployment/statefulset of service-registry (eureka-server) in kubernetes?

I have been trying to create a statefulset of service-registry (eureka-server) in a Spring Boot application. The reason I am doing this is that I want to attach a pre-defined name to the service-registry pod so that it is able to communicate with all the eureka clients even after it restarts. Even though I have been able to create the services (headless and NodePort) with the configuration, it doesn't create the pod/deployment or the PersistentVolumeClaim itself. Please check the below deployment yaml and suggest the changes.
# Define a 'Persistent Volume Claim' (PVC) for Storage, dynamically provisioned by cluster
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  namespace: rtb
  name: service-registry-pv-claim # name of PVC essential for identifying the storage data
  labels:
    app: eureka
spec:
  accessModes:
  - ReadWriteOnce # This specifies the mode of the claim that we are trying to create.
  resources:
    requests:
      storage: 1Gi # This will tell kubernetes about the amount of space we are trying to claim.
---
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: rtb
  name: eureka-cm
data:
  eureka_service_address: http://eureka-0.eureka:8761/eureka
---
apiVersion: v1
kind: Service
metadata:
  namespace: rtb
  name: eureka
  labels:
    app: eureka
spec:
  clusterIP: None
  ports:
  - port: 8761
    name: eureka
  selector:
    app: eureka
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  namespace: rtb
  name: eureka
spec:
  serviceName: "eureka"
  replicas: 1
  selector:
    matchLabels:
      app: eureka
  template:
    metadata:
      labels:
        app: eureka
    spec:
      containers:
      - name: eureka
        image: my-image
        imagePullPolicy: Always
        ports:
        - containerPort: 8761
        env:
        - name: EUREKA_SERVER_ADDRESS
          valueFrom:
            configMapKeyRef:
              name: eureka-cm
              key: eureka_service_address
        volumeMounts: # Mounting volume obtained from Persistent Volume Claim
        - name: service-registry-persistent-storage
          mountPath: /var/lib/eureka # This is the path in the container on which the mounting will take place.
      volumes:
      - name: service-registry-persistent-storage # Obtaining 'volume' from PVC
        persistentVolumeClaim:
          claimName: service-registry-pv-claim
---
apiVersion: v1
kind: Service
metadata:
  namespace: rtb
  name: eureka-lb
  labels:
    app: eureka
spec:
  selector:
    app: eureka
  type: NodePort
  ports:
  - port: 80
    targetPort: 8761
and below is the application.yml file
server:
  port: 8761
eureka:
  instance:
    hostname: "${HOSTNAME}.eureka"
  client:
    register-with-eureka: false
    fetch-registry: false
    serviceUrl:
      defaultZone: ${EUREKA_SERVER_ADDRESS}
This is how the eureka client apps refer to the eureka server:
eureka:
  instance:
    preferIpAddress: true
    hostname: eureka-0
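For reference, a StatefulSet pod behind a headless service is also reachable under a fully qualified in-cluster DNS name of the form <pod>.<service>.<namespace>.svc.cluster.local, so a sketch of the client settings using the full name (assuming the rtb namespace from the manifests above) would be:
eureka:
  instance:
    preferIpAddress: true
    hostname: eureka-0.eureka.rtb.svc.cluster.local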
I am new to Kubernetes, so please suggest the changes.
Configuration after adding the PersistentVolume:
apiVersion: v1
kind: PersistentVolume
metadata:
  namespace: rtb
  name: my-pv
spec:
  capacity:
    storage: 2Gi
  accessModes:
  - ReadWriteOnce
  hostPath:
    path: /run/desktop/mnt/host/c/Users/User/Documents/kubernetesbkp
---
# Define a 'Persistent Volume Claim' (PVC) for Storage, dynamically provisioned by cluster
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  namespace: rtb
  name: service-registry-pv-claim # name of PVC essential for identifying the storage data
  labels:
    app: eureka
spec:
  accessModes:
  - ReadWriteOnce # This specifies the mode of the claim that we are trying to create.
  resources:
    requests:
      storage: 1Gi # This will tell kubernetes about the amount of space we are trying to claim.
---
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: rtb
  name: eureka-cm
data:
  eureka_service_address: http://eureka-0.eureka:8761/eureka
---
apiVersion: v1
kind: Service
metadata:
  namespace: rtb
  name: eureka
  labels:
    app: eureka
spec:
  clusterIP: None
  ports:
  - port: 8761
    name: eureka
  selector:
    app: eureka
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  namespace: rtb
  name: eureka
spec:
  serviceName: "eureka"
  replicas: 1
  selector:
    matchLabels:
      app: eureka
  template:
    metadata:
      labels:
        app: eureka
    spec:
      containers:
      - name: eureka
        image: my-image
        imagePullPolicy: Always
        ports:
        - containerPort: 8761
        env:
        - name: EUREKA_SERVER_ADDRESS
          valueFrom:
            configMapKeyRef:
              name: eureka-cm
              key: eureka_service_address
        volumeMounts: # Mounting volume obtained from Persistent Volume Claim
        - name: service-registry-persistent-storage
          mountPath: /var/lib/eureka # This is the path in the container on which the mounting will take place.
      volumes:
      - name: service-registry-persistent-storage # Obtaining 'volume' from PVC
        persistentVolumeClaim:
          claimName: service-registry-pv-claim
---
apiVersion: v1
kind: Service
metadata:
  namespace: rtb
  name: eureka-lb
  labels:
    app: eureka
spec:
  selector:
    app: eureka
  type: NodePort
  ports:
  - port: 80
    targetPort: 8761
If you are running Kubernetes locally, you first have to create a PersistentVolume; only then can the PersistentVolumeClaim retrieve the storage from the PV you created. Otherwise your PVC will stay in a Pending state, because without a PV the PersistentVolumeClaim does not know where it needs to pick up the volume from.
So try creating the PersistentVolume like this:
PersistentVolume.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: my-pv
spec:
  capacity:
    storage: 5Gi
  accessModes:
  - ReadWriteOnce
  hostPath:
    path: /tmp/ # Path where you want to allocate your PV locally
Now you can create the PVC and the statefulset as per your requirement.
NOTE: Make sure the PV storage is always greater than or equal to the claim.
If you are using Docker Desktop Kubernetes, then the hostPath will be different from the one mentioned above; refer to this SO answer for it.
For more detailed information, refer to these links: link1 link2
Below is the deployment configuration which worked for me, with one caveat: I applied the configurations one by one; otherwise it doesn't work on a single apply, and the statefulset isn't created. I am presenting it as a single configuration file here for convenience.
It would be helpful if someone could point out why it doesn't work in a single apply command. Thanks!
apiVersion: v1
kind: ConfigMap
metadata:
  name: eureka-cm
data:
  eureka_service_address: http://eureka-0.eureka:8761/eureka
---
apiVersion: v1
kind: Service
metadata:
  name: eureka
  labels:
    app: eureka
spec:
  clusterIP: None
  ports:
  - port: 8761
    name: eureka
  selector:
    app: eureka
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: eureka
spec:
  serviceName: "eureka"
  replicas: 1
  selector:
    matchLabels:
      app: eureka
  template:
    metadata:
      labels:
        app: eureka
    spec:
      containers:
      - name: eureka
        image: my-image
        imagePullPolicy: Always
        ports:
        - containerPort: 8761
        env:
        - name: EUREKA_SERVER_ADDRESS
          valueFrom:
            configMapKeyRef:
              name: eureka-cm
              key: eureka_service_address
---
apiVersion: v1
kind: Service
metadata:
  name: eureka-lb
  labels:
    app: eureka
spec:
  selector:
    app: eureka
  type: NodePort
  ports:
  - port: 80
    targetPort: 8761

Kubernetes pointing to Oracle DB in a separate VM

I am currently working on a Kubernetes deployment. My application is running in a Kubernetes cluster while my DB is running in a different VM.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: dcalln
spec:
  selector:
    matchLabels:
      app: dcalln
  replicas: 1
  template:
    metadata:
      labels:
        app: dcalln
    spec:
      containers:
      - name: dcalln
        image: "xxx.io/registry:1.0.88-ad3c142-2108190744"
        ports:
        - containerPort: 8080
      imagePullSecrets:
      - name: regcred
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: dcalln
  name: dcalln
  namespace: testnamespace
spec:
  ports:
  - name: http
    port: 8080
    protocol: TCP
    targetPort: 1512
  externalIPs:
  - XXX.XXX.XXX.XXX
XXX.XXX.XXX.XXX is my Oracle DB server. It's not part of the Kubernetes cluster, but I see the DB connection is not happening. Is there anything I am missing? How do I change my deployment specification to correctly point to the DB?
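The usual pattern for reaching a database that lives outside the cluster is a Service without a selector plus a matching Endpoints object; the externalIPs field on the app's own Service (as above) only affects how the app is exposed, not what it connects to. A sketch, assuming Oracle's default listener port 1521:
apiVersion: v1
kind: Service
metadata:
  name: oracle-db # the in-cluster DNS name the app would use
  namespace: testnamespace
spec:
  ports:
  - protocol: TCP
    port: 1521
    targetPort: 1521
---
apiVersion: v1
kind: Endpoints
metadata:
  name: oracle-db # must match the Service name
  namespace: testnamespace
subsets:
- addresses:
  - ip: XXX.XXX.XXX.XXX # the DB VM's address
  ports:
  - port: 1521
The application's JDBC URL would then point at oracle-db.testnamespace:1521 instead of the raw VM IP.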

Kubernetes Ingress path-based routing not working as expected

I installed NGINX Ingress in the kubernetes cluster. When I am trying to access the microservice endpoint via the Ingress controller, it's not working as expected.
I have deployed two Spring Boot applications.
Ingress Rules
Path 1 -> /customer
Path 2 -> /prac
When I try to access one of the services, e.g.
http://test.practice.com/prac/practice/getprac, it does not work,
but when I try to access it without the Ingress path, http://test.practice.com/practice/getprac, it works.
I am not able to understand why it's not working with the Ingress path, and the same happens for the other service.
Microservice 1 (port 9090)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: customer
  namespace: practice
  labels:
    app: customer
spec:
  replicas: 5
  selector:
    matchLabels:
      app: customer
  template:
    metadata:
      labels:
        app: customer
    spec:
      imagePullSecrets:
      - name: testkuldeepsecret
      containers:
      - name: customer
        image: kuldeep99/customer:v1
        ports:
        - containerPort: 9090
          hostPort: 9090
---
apiVersion: v1
kind: Service
metadata:
  name: customer-service
  namespace: practice
  labels:
spec:
  ports:
  - port: 9090
    targetPort: 9090
    protocol: TCP
    name: http
  selector:
    app: customer
Microservice 2 (port 8000)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: prac
  namespace: practice
  labels:
    app: prac
spec:
  replicas: 4
  selector:
    matchLabels:
      app: prac
  template:
    metadata:
      labels:
        app: prac
    spec:
      imagePullSecrets:
      - name: testkuldeepsecret
      containers:
      - name: prac
        image: kuldeep99/practice:v1
        ports:
        - containerPort: 8000
          hostPort: 8000
---
apiVersion: v1
kind: Service
metadata:
  name: prac-service
  namespace: practice
  labels:
spec:
  ports:
  - port: 8000
    targetPort: 8000
    protocol: TCP
    name: http
  selector:
    app: prac
Service (customer-service and prac-service)
NAME               TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
customer-service   ClusterIP   10.97.203.19    <none>        9090/TCP   39m
ngtest             ClusterIP   10.98.74.149    <none>        80/TCP     21h
prac-service       ClusterIP   10.96.164.210   <none>        8000/TCP   15m
some-mysql         ClusterIP   None            <none>        3306/TCP   2d16h
Ingress
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: practice-ingress
  namespace: practice
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: practice.example.com
    http:
      paths:
      - backend:
          serviceName: customer-service
          servicePort: 9090
        path: /customer
      - backend:
          serviceName: prac-service
          servicePort: 8000
        path: /prac
For the nginx.ingress.kubernetes.io/rewrite-target: / annotation to work properly, you need to have the matching NGINX ingress controller installed; the one you have installed is a different distribution.
An alternative way to solve this issue is to configure the contextPath to /prac in the Spring application, as sketched below.
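For Spring Boot 2.x, that context path can be set in the application's application.yml, along these lines (a sketch, not from the original answer):
server:
  servlet:
    context-path: /prac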
On top of the discussion, I observed one thing: we should not confuse
apiVersion: networking.k8s.io/v1
kind: Ingress
and
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
First ensure which Ingress controller you are using, and decide the apiVersion based on that. I'm using "ingress-nginx" (not "nginx-ingress"). This one supports "apiVersion: networking.k8s.io/v1beta1" and works like a charm, as per "Arsene"'s comment.
This Ingress yaml file WORKS with the "ingress-nginx" Ingress controller:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/rewrite-target: /$2
  name: k8-exercise-03-two-app-ingress
spec:
  rules:
  - host: ex03.k8.sb.two.app.ingress.com
    http:
      paths:
      - backend:
          serviceName: k8-excercise-01-app-service
          servicePort: 8080
        path: /one(/|$)(.*)
      - backend:
          serviceName: k8-exercise-03-ms-service
          servicePort: 8081
        path: /two(/|$)(.*)
But this Ingress yaml file does NOT WORK with the "ingress-nginx" Ingress controller:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: k8-exercise-03-two-app-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
    # nginx.ingress.kubernetes.io/use-regex: "true"
    ingress.kubernetes.io/rewrite-target: /$2
spec:
  # ingressClassName: nginx
  rules:
  # 192.168.1.5 ex03.k8.sb.com is mapped in the hosts file. 192.168.1.5 is the host machine IP
  - host: ex03.k8.sb.two.app.ingress.com
    http:
      paths:
      - backend:
          service:
            name: k8-excercise-01-app-service
            port:
              number: 8080
        path: /one(/|$)(.*)
        pathType: Prefix
      - pathType: Prefix
        path: /two(/|$)(.*)
        backend:
          service:
            name: k8-exercise-03-ms-service
            port:
              number: 8081
I can access the Spring Boot API calls like so:
For App-1:
http://ex03.k8.sb.two.app.ingress.com/one/
Result: App One - Root
http://ex03.k8.sb.two.app.ingress.com/one/one
Result: App One - One API
http://ex03.k8.sb.two.app.ingress.com/one/api/v1/hello
Result: App One - Hello API
App-2:
http://ex03.k8.sb.two.app.ingress.com/two/message/James%20Bond
Result: App Two - Hi James Bond API
Finally, if anyone knows how to change the "apiVersion: networking.k8s.io/v1" yaml to work with the "ingress-nginx" controller, it would be appreciated. Thank you, and sorry for the long content.
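For what it's worth, the non-working v1 manifest above differs from the working one in more than the apiVersion: its rewrite annotation is missing the nginx. prefix, the use-regex annotation is commented out, and regex paths are declared as pathType: Prefix. A sketch of the v1 manifest adjusted along the lines of the official ingress-nginx rewrite example (not verified against this exact setup):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: k8-exercise-03-two-app-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/use-regex: "true"
    nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  rules:
  - host: ex03.k8.sb.two.app.ingress.com
    http:
      paths:
      - path: /one(/|$)(.*)
        pathType: ImplementationSpecific # regex paths, not Prefix
        backend:
          service:
            name: k8-excercise-01-app-service
            port:
              number: 8080
      - path: /two(/|$)(.*)
        pathType: ImplementationSpecific
        backend:
          service:
            name: k8-exercise-03-ms-service
            port:
              number: 8081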
I spent literally a day on this problem. The problem was simply the wrong nginx installed. I used the Helm chart found here to install ingress-nginx.
To install it, please use Helm version 3:
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm install ingress-nginx ingress-nginx/ingress-nginx
Once it has run, the logs show a snippet that illustrates what your Ingress should look like. In case you want to do the above, you can use the annotation suggested above, and from there you can follow the tutorials here to achieve more, such as rewrites.
My cluster is deployed on GCP using GKE.
When done, this is the output log:
NAME: ingress-nginx
LAST DEPLOYED: Sat Apr 24 07:56:11 2021
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
The ingress-nginx controller has been installed.
It may take a few minutes for the LoadBalancer IP to be available.
You can watch the status by running 'kubectl --namespace default get services -o wide -w ingress-nginx-controller'
An example Ingress that makes use of the controller:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
  name: example
  namespace: foo
spec:
  rules:
  - host: www.example.com
    http:
      paths:
      - backend:
          serviceName: exampleService
          servicePort: 80
        path: /
  # This section is only required if TLS is to be enabled for the Ingress
  tls:
  - hosts:
    - www.example.com
    secretName: example-tls
If TLS is enabled for the Ingress, a Secret containing the certificate and key must also be provided:
apiVersion: v1
kind: Secret
metadata:
  name: example-tls
  namespace: foo
data:
  tls.crt: <base64 encoded cert>
  tls.key: <base64 encoded key>
type: kubernetes.io/tls
This is how it looks now after installing it:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/rewrite-target: /$2
  name: example
  # namespace: foo
spec:
  rules:
  - host: [your ip address].sslip.io
    http:
      paths:
      - backend:
          serviceName: registry-app-server
          servicePort: 8761
        path: /eureka/(.*)
      - backend:
          serviceName: api-gateway-server
          servicePort: 7000
        path: /api(/|$)(.*)
As you can see, I am deploying Spring microservices using Kubernetes (GKE).
There are a lot of benefits to using nginx-ingress over the built-in GKE ingress, and it is more popular than its counterparts.

Error 504 Gateway Time-out nginx-ingress controller

I'm setting up an RKE cluster on AWS EC2 instances, but I have a problem trying to set up an nginx ingress controller: sometimes I get an error when trying to access it. The architecture I have is this:
Instance #1 is just an nginx server that load-balances across the nodes. Instances #2 and #3 are RKE nodes, and both have these roles:
- controlplane
- worker
- etcd
I have deployed two services/deployments. I am trying to set up an nginx ingress controller to redirect traffic to each service according to the path, but sometimes I just get a 504 Gateway Time-out and other times it loads correctly. Using hey to run a small load test, I see that almost 50% of the requests get the 504 error.
Status code distribution:
[200] 102 responses
[504] 98 responses
Debugging the nginx-ingress controller, I see that one of them seems unable to reach the service; I think that is why I sometimes get the 504 error, but I don't know why it happens.
2020/01/27 01:40:31 [error] 1767#1767: *128496 upstream timed out (110: Connection timed out) while connecting to upstream, client: 10.0.1.163, server: <host>, request: "GET /nginx HTTP/1.1", upstream: "http://10.42.1.4:80/", host: "<Host>"
The kubernetes configuration:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: system-deployment
  labels:
    app: system
spec:
  replicas: 1
  selector:
    matchLabels:
      app: system
  template:
    metadata:
      labels:
        app: system
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: inventory-deployment
  labels:
    app: inventory
spec:
  replicas: 1
  selector:
    matchLabels:
      app: inventory
  template:
    metadata:
      labels:
        app: inventory
    spec:
      containers:
      - name: inventory-container
        image: dockersamples/101-tutorial
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: system-service
spec:
  selector:
    app: system
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: inventory-service
spec:
  selector:
    app: inventory
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: root-ingress
  annotations:
    nginx.ingress.kubernetes.io/proxy-read-timeout: "3600"
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: <host>
    http:
      paths:
      - path: /nginx
        backend:
          serviceName: system-service
          servicePort: 80
      - path: /
        backend:
          serviceName: inventory-service
          servicePort: 80
My theory is that the ingress controller can't reach the service on the other node, and that is why I get the 504 error, but as far as I know a service can be accessed from any node in the cluster. Does someone know what could be happening here?
Thanks,
You probably need to allow inter-node traffic between your EC2 instances by adjusting their security group in the AWS EC2 dashboard. In particular, RKE's default CNI (Canal) tunnels pod-to-pod traffic over VXLAN on UDP port 8472; if that port is blocked between the nodes, the ingress controller cannot reach pods scheduled on the other node, which would match roughly half of the requests timing out.

Deploy Laravel in Kubernetes

I'm trying to deploy a Laravel application in Kubernetes on the Google Cloud Platform.
I followed a couple of tutorials and was successful trying them locally on a Docker VM.
https://learnk8s.io/blog/kubernetes-deploy-laravel-the-easy-way
https://blog.cloud66.com/deploying-your-laravel-php-applications-with-cloud-66/
But when I tried to deploy it in Kubernetes using an ingress to assign a domain name to the application, I keep getting the 502 Bad Gateway page.
I'm using an nginx ingress controller with the image k8s.gcr.io/nginx-ingress-controller:0.8.3, and my ingress is as follows:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  tls:
  - hosts:
    - domainname.com
    secretName: sslcertificate
  rules:
  - host: domain.com
    http:
      paths:
      - backend:
          serviceName: service
          servicePort: 80
        path: /
this is my application service
apiVersion: v1
kind: Service
metadata:
  name: service
  labels:
    name: demo
    version: v1
spec:
  ports:
  - port: 80
    targetPort: 8080
    protocol: TCP
  selector:
    name: demo
  type: NodePort
this is my ingress controller
apiVersion: v1
kind: Service
metadata:
  name: default-http-backend
  labels:
    k8s-app: default-http-backend
spec:
  ports:
  - port: 80
    targetPort: 8080
    protocol: TCP
    name: http
  selector:
    k8s-app: default-http-backend
---
apiVersion: v1
kind: ReplicationController
metadata:
  name: default-http-backend
spec:
  replicas: 1
  selector:
    k8s-app: default-http-backend
  template:
    metadata:
      labels:
        k8s-app: default-http-backend
    spec:
      terminationGracePeriodSeconds: 60
      containers:
      - name: default-http-backend
        # Any image is permissible as long as:
        # 1. It serves a 404 page at /
        # 2. It serves 200 on a /healthz endpoint
        image: gcr.io/google_containers/defaultbackend:1.0
        livenessProbe:
          httpGet:
            path: /healthz
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 30
          timeoutSeconds: 5
        ports:
        - containerPort: 8080
        resources:
          limits:
            cpu: 10m
            memory: 20Mi
          requests:
            cpu: 10m
            memory: 20Mi
---
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx-ingress-controller
  labels:
    k8s-app: nginx-ingress-lb
spec:
  replicas: 1
  selector:
    k8s-app: nginx-ingress-lb
  template:
    metadata:
      labels:
        k8s-app: nginx-ingress-lb
        name: nginx-ingress-lb
    spec:
      terminationGracePeriodSeconds: 60
      containers:
      - image: gcr.io/google_containers/nginx-ingress-controller:0.8.3
        name: nginx-ingress-lb
        imagePullPolicy: Always
        readinessProbe:
          httpGet:
            path: /healthz
            port: 10254
            scheme: HTTP
        livenessProbe:
          httpGet:
            path: /healthz
            port: 10254
            scheme: HTTP
          initialDelaySeconds: 10
          timeoutSeconds: 1
        # use downward API
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        ports:
        - containerPort: 80
          hostPort: 80
        - containerPort: 443
          hostPort: 443
        # we expose 18080 to access nginx stats at the url /nginx-status
        # this is optional
        - containerPort: 18080
          hostPort: 18080
        args:
        - /nginx-ingress-controller
        - --default-backend-service=$(POD_NAMESPACE)/default-http-backend
and here is my laravel application deployment
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: demo-rc
  labels:
    name: demo
    version: v1
spec:
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        name: demo
        version: v1
    spec:
      containers:
      - image: gcr.io/projectname/laravelapp:vx
        name: app-pod
        ports:
        - containerPort: 8080
I tried to add the domain entry to the hosts file, but with no luck!
Is there a specific configuration I have to add to the configmap.yaml file for the nginx ingress controller?
In short, to be able to reach your application via the external domain name (singapore.smartlabplatform.com), you need to create an A DNS record for the GCP L4 Load Balancer's external IP address (in other words, the EXTERNAL-IP of your default nginx-ingress-controller Service), here seen as pending:
==> v1/Service
NAME                            TYPE           CLUSTER-IP     EXTERNAL-IP
nginx-ingress-controller        LoadBalancer   10.7.248.226   pending
nginx-ingress-default-backend   ClusterIP      10.7.245.75    none
How to do this is explained on the GKE tutorials page here.
In the current state of your environment you can only reach your application in two ways:
From outside, via the Load Balancer EXTERNAL-IP:
From inside your Kubernetes cluster, using the laravel-kubernetes-demo service DNS name:
$ curl laravel-kubernetes-demo.default.svc.cluster.local
<title>Laravel Kubernetes Demo :: LearnK8s</title>
If you want all that magic, like the automatic creation of DNS records, to happen along with the appearance of host: domain.com in your ingress resource spec, you should use external-dns (it makes Kubernetes resources discoverable via public DNS servers); here is a tutorial on how to set it up specifically for GKE, and a sketch follows below.
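For orientation, external-dns runs as a small Deployment inside the cluster and watches Ingress hosts; a sketch of the relevant container args for this GKE setup (flag values are illustrative, following external-dns conventions rather than anything stated in this answer):
args:
- --source=ingress # watch Ingress resources for hostnames
- --provider=google # create the records in Google Cloud DNS
- --domain-filter=smartlabplatform.com # only manage this zone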
