I am trying to deploy a Docker image that lives in a public repository. I am trying to create a LoadBalancer service and expose it on my machine's IP address rather than 127.0.0.1.
I am using Windows 10, and my Docker installation uses the WSL2 backend instead of Hyper-V.
Below is my .yaml file. The service inside runs on port 4200, so to avoid any kind of confusion I kept all the ports at 4200.
apiVersion: v1
kind: Service
metadata:
  name: hoopla
spec:
  selector:
    app: hoopla
  ports:
    - protocol: TCP
      port: 4200
      targetPort: 4200
  clusterIP: 10.96.1.3
  type: LoadBalancer
status:
  loadBalancer:
    ingress:
      - ip: 192.168.0.144
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app.kubernetes.io/name: hoopla
  name: hoopla
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: hoopla
  template:
    metadata:
      labels:
        app.kubernetes.io/name: hoopla
    spec:
      containers:
        - image: pubrepo/myimg:latest
          name: hoopla
          ports:
            - containerPort: 4200
Can anybody help me understand what mistake I am making? I basically want to expose this on my system's IP address.
The LoadBalancer service type requires a cloud provider's load balancer (https://kubernetes.io/docs/concepts/services-networking/service/):
LoadBalancer: Exposes the Service externally using a cloud provider's load balancer. NodePort and ClusterIP Services, to which the external load balancer routes, are automatically created.
If you want to expose your service on your local machine, use the NodePort service type, for example (see the sketch after the command below). If you just want to test your web app, you can use the ClusterIP service type and do a port-forward, for example with your ClusterIP service:
kubectl port-forward svc/hoopla 4200:4200
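For the NodePort route, a minimal sketch might look like the following. The nodePort value 30080 is an arbitrary pick from the default 30000-32767 range; note also that the selector has to match the pod labels from your deployment (app.kubernetes.io/name: hoopla), not app: hoopla:
apiVersion: v1
kind: Service
metadata:
  name: hoopla
spec:
  type: NodePort
  selector:
    app.kubernetes.io/name: hoopla
  ports:
    - protocol: TCP
      port: 4200
      targetPort: 4200
      nodePort: 30080  # arbitrary choice from the default NodePort range
The app should then be reachable at http://<your-machine-ip>:30080.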
I can't for the life of me get this to connect.
It is a Golang application running on Kubernetes.
The Docker image runs just fine on its own, and the pod launches, but the connection times out.
apiVersion: v1
kind: Service
metadata:
  name: ark-service
  namespace: ark
spec:
  type: NodePort
  ports:
    - port: 80
      targetPort: 8080
      nodePort: 30008
  selector:
    app: ark-api
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ark-backend
  namespace: ark
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ark-api
  template:
    metadata:
      labels:
        app: ark-api
    spec:
      imagePullSecrets:
        - name: regcred
      containers:
        - name: ark-api-container
          image: xxx
          imagePullPolicy: Always
          resources:
            limits:
              memory: "128Mi"
              cpu: "500m"
          ports:
            - name: web
              containerPort: 8080
I am able to boot the Docker container just fine, and it runs.
It turns out the container gets terminated, and I have no idea why.
You could check whether port 8080 is listening inside the container:
kubectl exec -it <pod_name> -n <namespace> -- netstat -ntpl
If there is no netstat command in the container, you could try building a base image that includes it.
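If rebuilding the image is impractical, a port-forward straight to the pod is another way to confirm the process is listening (the pod name here is a placeholder):
kubectl port-forward -n ark pod/<pod_name> 8080:8080
curl -I http://localhost:8080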
Check whether port 30008 is listening on the node. Run the following command on the node:
netstat -ntpl | grep 30008
You could also try not specifying the nodePort in the service YAML at all and let Kubernetes choose one for you. That avoids picking a port that is already in use on your node.
apiVersion: v1
kind: Service
metadata:
  name: ark-service
  namespace: ark
spec:
  type: NodePort  # no nodePort given, so Kubernetes allocates one from the default range
  selector:
    app: ark-api  # must match the pod labels from the deployment
  ports:
    - port: 80
      targetPort: 8080
Try using ClusterIP instead of NodePort. If you are using any kind of Ingress, you then have to create rules in your Ingress config so it can expose your service to the outside web via your load balancer.
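For example, a minimal Ingress sketch routing to the ark-service above (the host is a made-up placeholder, and this assumes an ingress controller is installed in the cluster):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ark-ingress
  namespace: ark
spec:
  rules:
    - host: ark.example.com  # placeholder host
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: ark-service
                port:
                  number: 80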
I deleted the service and used port forwarding instead, and was able to boot everything. I'll have to circle back to the service to try and figure it out.
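For reference, the port-forward equivalent would be something like this (assuming the deployment from the question):
kubectl port-forward -n ark deploy/ark-backend 8080:8080
curl http://localhost:8080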
Background
I am using Docker for Windows v20.10.6 (with Kubernetes enabled).
I have created two simple, out-of-the-box .NET 5.0 applications:
1. Web API (reachable over HTTP, listening on port 7070)
2. Web App (MVC) that shows a parsed table from the Web API (listening on port 80)
A. ✔️ Created a connection between the applications using Docker Swarm Mode
Created a swarm using docker swarm init
Created an 'overlay' driver network named personal-overlay.
Created the Web API service using docker service create --network personal-overlay --name api webapi
Created the Web App service using docker service create --name web --network personal-overlay -p 30080:80 webapp
B. ✔️ Created a generic NGINX deployment and service
deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    tier: frontend
spec:
  selector:
    matchLabels:
      app: myapp
  replicas: 1
  template:
    metadata:
      name: nginx
      labels:
        app: myapp
    spec:
      containers:
        - name: nginx
          image: nginx
service:
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  type: NodePort
  ports:
    - targetPort: 80
      port: 80
      nodePort: 30080
  selector:
    app: myapp
I could access the NGINX through http://localhost:30080 without an issue (using the web browser).
❌ The issue I'm currently facing
Tagged the images test/api and test/web
Created the same files using those Visual Studio images:
deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-deployment
  labels:
    tier: frontend
spec:
  selector:
    matchLabels:
      app: myapp
  replicas: 1
  template:
    metadata:
      name: test-pod
      labels:
        app: myapp
    spec:
      containers:
        - name: api
          image: test/api
          imagePullPolicy: Never
        - name: web
          image: test/web
          imagePullPolicy: Never
service:
apiVersion: v1
kind: Service
metadata:
  name: test-service
spec:
  type: NodePort
  ports:
    - targetPort: 80
      port: 80
      nodePort: 30080
  selector:
    app: myapp
Yet I cannot access http://localhost:30080.
EDIT [1]:
I am trying to access it through the web browser, and I get an HTTP ERROR 500: "Failed to load resource: the server responded with a status of 500 (Internal Server Error)."
Whenever I use curl -I http://localhost:30080, I get the following response:
HTTP/1.1 500 Internal Server Error
Date: Thu, 13 May 2021 08:20:25 GMT
Server: Kestrel
Content-Length: 0
EDIT [2]:
I even tried to scale it down to just this one pod (the web application).
pod:
apiVersion: v1
kind: Pod
metadata:
  name: consumer-pod
  labels:
    name: consumer-pod
    app: api-and-consumer
spec:
  containers:
    - name: consumer
      image: test/web
      imagePullPolicy: Never
      ports:
        - containerPort: 80
service:
apiVersion: v1
kind: Service
metadata:
  name: consumer-external-svc
  labels:
    name: consumer-external-svc
    app: api-and-consumer
spec:
  type: NodePort
  ports:
    - port: 80
      targetPort: 80
      nodePort: 30080
  selector:
    name: consumer-pod
    app: api-and-consumer
Yet it does not work (with or without the ports section in the pod YAML file).
These are the logs I get using the kubectl logs web-pod-<fullname> command (which show it is actually listening on port 80):
warn: Microsoft.AspNetCore.DataProtection.Repositories.FileSystemXmlRepository[60]
      Storing keys in a directory '/root/.aspnet/DataProtection-Keys' that may not be persisted outside of the container. Protected data will be unavailable when container is destroyed.
warn: Microsoft.AspNetCore.DataProtection.KeyManagement.XmlKeyManager[35]
      No XML encryptor configured. Key {70ddc140-9846-4052-b869-8bcc5250d39e} may be persisted to storage in unencrypted form.
info: Microsoft.Hosting.Lifetime[0]
      Now listening on: http://[::]:80
info: Microsoft.Hosting.Lifetime[0]
      Application started. Press Ctrl+C to shut down.
info: Microsoft.Hosting.Lifetime[0]
      Hosting environment: Production
info: Microsoft.Hosting.Lifetime[0]
      Content root path: /app
I should also mention that using kubectl cluster-info dump I get the following line (for the service though, not the pod itself):
time="2021-05-13T10:56:35Z" level=error msg="Port 30080 for service web-external-svc is already opened by another service"
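That message points at the likely culprit: node port 30080 is already claimed. Note that the Swarm service from step A also publishes port 30080 (-p 30080:80), and the earlier nginx-service used nodePort 30080 as well. A quick way to look for a conflicting Kubernetes service (a sketch):
kubectl get svc --all-namespaces | grep 30080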
I want to figure out how Kubernetes knows which nodePort can be allocated when creating a new service of the NodePort type, like this:
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  type: NodePort
  selector:
    app: MyApp
  ports:
    - port: 80
      targetPort: 80
I searched Google and found this Kubernetes source code, but I don't understand how it works:
https://github.com/kubernetes/kubernetes/blob/master/pkg/registry/core/service/portallocator/allocator.go
The nodePort is chosen randomly from the range 30000-32767. You can also set it explicitly in the service definition:
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  type: NodePort
  selector:
    app: MyApp
  ports:
    # By default and for convenience, the `targetPort` is set to the same value as the `port` field.
    - port: 80
      targetPort: 80
      # Optional field
      # By default and for convenience, the Kubernetes control plane will allocate a port from a range (default: 30000-32767)
      nodePort: 30007
From the documentation: https://kubernetes.io/docs/concepts/services-networking/service/#nodeport
Update
The classes in the package kubernetes/pkg/registry/core/service/portallocator are responsible for allocating a node port for a service.
This test documents the behavior: https://github.com/kubernetes/kubernetes/blob/master/pkg/registry/core/service/portallocator/operation_test.go
Kubernetes picks a random port; if that one isn't free, it takes the next one.
If you can read Go, the other files in that package are a good starting point for understanding the behavior.
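To watch the allocation happen, you can create the service without a nodePort and read back what the control plane assigned (a sketch, assuming the manifest above is saved as my-service.yaml):
kubectl apply -f my-service.yaml
kubectl get svc my-service -o jsonpath='{.spec.ports[0].nodePort}'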
I'm following the Spring and Kubernetes integration tutorial:
https://spring.io/guides/gs/spring-boot-kubernetes/
In my current scenario, I have 1 master and 2 worker servers.
When I deploy the file below using the command kubectl apply -f deployment.yaml, I can make a request from within the master server using kubectl port-forward svc/demo 8080:8080 and curl localhost:8080/actuator/health.
What I want to do is make an external request (from a public computer, i.e. my computer) to the service I created (kubernetes_master_ip:8080/actuator), but when I try this I get "connection refused".
What is missing?
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: demo
  name: demo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: demo
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: demo
    spec:
      containers:
        - image: springguides/demo
          name: demo
          resources: {}
status: {}
---
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: null
  labels:
    app: demo
  name: demo
spec:
  ports:
    - name: 8080-8080
      port: 8080
      protocol: TCP
      targetPort: 8080
  selector:
    app: demo
  type: ClusterIP
status:
  loadBalancer: {}
You need to change the service type to expose the application. There are two ways:
- LoadBalancer type: only on cloud providers
- NodePort type: works on-premises or with minikube
Change your service YAML to the one below:
apiVersion: v1
kind: Service
metadata:
  labels:
    app: demo
  name: demo
spec:
  ports:
    - name: 8080-8080
      port: 8080
      nodePort: 31234
      protocol: TCP
      targetPort: 8080
  selector:
    app: demo
  type: NodePort
Once the service is created, check which node IP the application's container landed on:
kubectl get pods -o wide
then try to access the application at:
http://node_ip:31234/actuator
You can change your service type to LoadBalancer, which will expose your service to the external internet via an IP address. The LoadBalancer service type only works with cloud providers.
For more details you can visit : https://kubernetes.io/docs/concepts/services-networking/
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: null
  labels:
    app: demo
  name: demo
spec:
  ports:
    - name: 8080-8080
      port: 8080
      protocol: TCP
      targetPort: 8080
  selector:
    app: demo
  type: LoadBalancer
Save it as YAML and apply it; it will provide the IP address.
You can access the service via that IP:
kubectl get svc
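On a cluster without a cloud load balancer, the EXTERNAL-IP column will stay <pending> forever; on a cloud provider you can wait for the address to be assigned with:
kubectl get svc demo --watch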
I created a service and used NodePort etc., but couldn't access the service.
I created a web-service.yaml file with the following content and used kubectl to create the Service:
apiVersion: v1
kind: Service
metadata:
  name: web-service
  labels:
    app: web-service
spec:
  type: NodePort
  ports:
    - port: 80
      protocol: TCP
  selector:
    app: webserver
and a webserver.yaml file with the following Deployment details:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: webserver
spec:
  replicas: 3
  template:
    metadata:
      labels:
        run: webserver
    spec:
      containers:
        - name: webserver
          image: nginx:alpine
          ports:
            - containerPort: 80
In your deployment the pod label is run=webserver, but in your service the selector is app=webserver. The service uses app=webserver as a selector, through which it would pick up pods that have the label app set to webserver. Here none of the pods has the app label, so the deployment is never exposed through the service. The label names and values in the deployment and the service must match.
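For example, changing the service selector to match the pod labels would fix it (everything else from web-service.yaml unchanged):
apiVersion: v1
kind: Service
metadata:
  name: web-service
  labels:
    app: web-service
spec:
  type: NodePort
  ports:
    - port: 80
      protocol: TCP
  selector:
    run: webserver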