Consuming an external API from a Kubernetes pod - Spring

I'm learning Kubernetes and I developed a simple Spring Boot project that exposes a simple API, /getHomeSpace. When invoked, this API makes a request to an external API: https://api.nasa.gov/planetary/apod. When I start up my app with docker-compose it works perfectly. But when I set up a Kubernetes pod and service I receive the following error:
Error during callApod: I/O error on GET request for "https://api.nasa.gov/planetary/apod": No subject alternative DNS name matching api.nasa.gov found.; nested exception is javax.net.ssl.SSLHandshakeException: No subject alternative DNS name matching api.nasa.gov found.
My Kubernetes config files:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      component: server
  template:
    metadata:
      labels:
        component: server
    spec:
      containers:
      - name: server
        image: antonio94c/view_api:11
        ports:
        - containerPort: 8080
apiVersion: v1
kind: Service
metadata:
  name: api-service
spec:
  type: ClusterIP
  ports:
  - port: 8080
    targetPort: 8080
    #nodePort: 30000
  selector:
    component: server
What am I missing?
I tried reading the Kubernetes docs and related Stack Overflow topics, but I didn't find a solution.
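An error like "No subject alternative DNS name matching api.nasa.gov found" means the certificate the pod receives during the TLS handshake was not issued for api.nasa.gov, which usually points at an intercepting proxy or at cluster DNS resolving the name to something unexpected, rather than at the Spring code itself. One way to see which certificate the pod actually gets is to run a throwaway debug pod inside the cluster; this is only a diagnostic sketch, and the pod name and image tag are assumptions:
apiVersion: v1
kind: Pod
metadata:
  name: tls-debug                  # hypothetical name; delete the pod after testing
spec:
  restartPolicy: Never
  containers:
  - name: curl
    image: curlimages/curl:8.5.0   # assumed tag; any recent curl image works
    command: ["curl", "-v", "https://api.nasa.gov/planetary/apod"]
kubectl logs tls-debug then shows the certificate subject and SAN list as seen from inside the cluster; if it is not the NASA certificate, the problem lies in the cluster's network path rather than in the application.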

Related

Nginx ingress configuration for a Kubernetes cluster hosted on Windows

I am running a Kubernetes cluster on my Windows PC via Docker Desktop. I am trying to create a very basic pod with a simple ingress configuration, but it doesn't seem to work. I thought a backend pod + service + ingress was a very basic setup, but I can't find a lot of help online. Please advise what I am doing wrong here.
My deployment.yaml file
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:latest
        ports:
        - containerPort: 80
service.yaml
apiVersion: v1
kind: Service
metadata:
  name: test-cluster-ip
spec:
  selector:
    app: nginx
  ports:
  - protocol: TCP
    port: 1234
    targetPort: 80
ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: minimal-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  ingressClassName: nginx
  rules:
  - http:
      paths:
      - path: /testpath
        pathType: Exact
        backend:
          service:
            name: test-cluster-ip
            port:
              number: 1234
This is what I see when I access localhost from the browser
Also, I would like to ask whether it is uncommon to run Kubernetes on Windows even for testing (especially with ingress). I don't seem to find a lot of examples on the internet.
I thought a backend pod + service + ingress was a very basic setup, but I can't find a lot of help online. Please advise what I am doing wrong here.
It is indeed a very basic setup, and your k8s Deployment/Service/Ingress YAML files are correct.
First, check whether you have installed the NGINX ingress controller. If not, run:
$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.1.1/deploy/static/provider/cloud/deploy.yaml
After that, you will be able to reach the k8s cluster using the following URL:
http://kubernetes.docker.internal/
But by deploying the Ingress like this:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: minimal-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  ingressClassName: nginx
  rules:
  - http:
      paths:
      - path: /testpath
        pathType: Exact
        backend:
          service:
            name: test-cluster-ip
            port:
              number: 1234
you are configuring the Ingress to rewrite /testpath to /. Requesting the URL without /testpath will therefore return a 404 status code.
See more rewrite examples in the ingress-nginx documentation.
So, if you use the following URL, you will get the Nginx welcome page from the k8s deployment:
http://kubernetes.docker.internal/testpath
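For comparison, when a prefix should be stripped but the rest of the path preserved, the ingress-nginx rewrite annotation is normally combined with a capture group; a sketch along those lines (the Ingress name is illustrative):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: rewrite-ingress                        # illustrative name
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  ingressClassName: nginx
  rules:
  - http:
      paths:
      - path: /testpath(/|$)(.*)               # everything after /testpath is captured as $2
        pathType: ImplementationSpecific
        backend:
          service:
            name: test-cluster-ip
            port:
              number: 1234
With this variant, http://kubernetes.docker.internal/testpath/index.html is forwarded to the backend as /index.html instead of always being rewritten to /.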

Kubernetes: spring cloud gateway not working

I have a Spring Cloud Gateway that works fine in the Docker configuration, like this
(all routes/services except ratings are removed for readability's sake):
@Value("${hosts.ratings}")
private String ratingsPath;

@Bean
public RouteLocator customRouteLocator(RouteLocatorBuilder builder) {
    return builder.routes()
            .route(r -> r.host("*").and().path("/api/ratings/**")
                    .uri(ratingsPath + ":2226/api/ratings/"))
            // ...other routes...
            .build();
}
This gets its values from application.properties locally, and from an environment variable in Docker, like so in the docker-compose file:
apigw:
  build: ./Api-Gateway
  container_name: apigw
  links:
    - ratings
    ...
  depends_on:
    - ratings
    ...
  ports:
    - "80:8080"
  environment:
    - hosts_ratings=http://ratings
    ...
This configuration works just fine. However, when porting this to our Kubernetes cluster, all routes return a 404.
The Deployment of our API gateway is as follows:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: apigw
  name: apigw-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: apigw
  template:
    metadata:
      labels:
        app: apigw
    spec:
      containers:
      - name: apigw
        image: redacted
        ports:
        - containerPort: 8080
        env:
        - name: hosts_ratings
          value: "ratings-service.default.svc.cluster.local"
        ...
Here ratings-service is our ratings service (which definitely works: when exposed directly via its own Service, it responds), defined like this:
apiVersion: v1
kind: Service
metadata:
  name: ratings-service
  labels:
    app: ratings
spec:
  selector:
    app: ratings
  ports:
  - port: 2226
    targetPort: 2226
The Service for our API gateway is as follows, running on bare metal with an external IP that does work:
apiVersion: v1
kind: Service
metadata:
  name: apigw-service
  labels:
    app: apigw
spec:
  selector:
    app: apigw
  ports:
  - port: 80
    targetPort: 8080
  externalIPs:
  - A.B.C.D
The way I believe it should work is that ratings-service.default.svc.cluster.local gets resolved to the correct IP, filled into the ratingsPath variable, and the request succeeds, but this is not the case.
Our other services are able to communicate in the same way, but the API gateway does not seem to be able to do so.
What could be the problem?
Posting a community wiki answer based on a comment for better visibility. Feel free to expand it.
The issue was a faulty version of the image:
It seems like the service I was using just straight up didn't work. Must have been a faulty version of the image I was using.
Check also:
Access Services Running on Clusters | Kubernetes
Service | Kubernetes
Debug Services | Kubernetes
DNS for Services and Pods | Kubernetes
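As a side note when porting the docker-compose configuration: the route URI is built by string concatenation, so the Kubernetes environment value needs to carry the same http:// scheme that the compose value (hosts_ratings=http://ratings) had; a minimal sketch of the container env block under that assumption:
env:
- name: hosts_ratings
  # assumption: keep the scheme so that ratingsPath + ":2226/api/ratings/" forms a valid URI
  value: "http://ratings-service.default.svc.cluster.local"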

Visual Studio application is not exposed using a Kubernetes service

Background
I am using Docker for Windows v20.10.6 (with Kubernetes enabled).
I have created two simple, out-of-the-box .NET 5.0 applications:
1. Web API (reached over HTTP, listening on port 7070)
2. Web App (MVC) that shows a parsed table from the Web API (listening on port 80)
A. ✔️ Created a connection between the applications using Docker Swarm Mode
Created a swarm using docker swarm init
Created an 'overlay' driver network named personal-overlay.
Created the Web API service using docker service create --network personal-overlay --name api webapi
Created the Web App service using docker service create --name web --network personal-overlay -p 30080:80 webapp
B. ✔️ Created a generic NGINX deployment and service
deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    tier: frontend
spec:
  selector:
    matchLabels:
      app: myapp
  replicas: 1
  template:
    metadata:
      name: nginx
      labels:
        app: myapp
    spec:
      containers:
      - name: nginx
        image: nginx
service:
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  type: NodePort
  ports:
  - targetPort: 80
    port: 80
    nodePort: 30080
  selector:
    app: myapp
I could access the NGINX through http://localhost:30080 without an issue (using the web browser).
❌ The issue I'm currently facing
Tagged the images test/api and test/web
Created the same files using those Visual Studio images:
deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-deployment
  labels:
    tier: frontend
spec:
  selector:
    matchLabels:
      app: myapp
  replicas: 1
  template:
    metadata:
      name: test-pod
      labels:
        app: myapp
    spec:
      containers:
      - name: api
        image: test/api
        imagePullPolicy: Never
      - name: web
        image: test/web
        imagePullPolicy: Never
service:
apiVersion: v1
kind: Service
metadata:
  name: test-service
spec:
  type: NodePort
  ports:
  - targetPort: 80
    port: 80
    nodePort: 30080
  selector:
    app: myapp
Yet, I can not access http://localhost:30080.
EDIT [1]:
I am trying to access it through the web browser, and I get an HTTP ERROR 500: "Failed to load resource: the server responded with a status of 500 (Internal Server Error)."
Whenever I am using curl -I http://localhost:30080 I get the following response:
HTTP/1.1 500 Internal Server Error
Date: Thu, 13 May 2021 08:20:25 GMT
Server: Kestrel
Content-Length: 0
EDIT [2]:
I even tried to scale it down into just this one pod (the web application).
pod:
apiVersion: v1
kind: Pod
metadata:
  name: consumer-pod
  labels:
    name: consumer-pod
    app: api-and-consumer
spec:
  containers:
  - name: consumer
    image: test/web
    imagePullPolicy: Never
    ports:
    - containerPort: 80
service:
apiVersion: v1
kind: Service
metadata:
  name: consumer-external-svc
  labels:
    name: consumer-external-svc
    app: api-and-consumer
spec:
  type: NodePort
  ports:
  - port: 80
    targetPort: 80
    nodePort: 30080
  selector:
    name: consumer-pod
    app: api-and-consumer
Yet it does not work (with or without the ports section in the pod YAML file).
These are the logs I get using the kubectl logs web-pod-<fullname> command (which says it is actually listening on port 80):
warn: Microsoft.AspNetCore.DataProtection.Repositories.FileSystemXmlRepository[60]
      Storing keys in a directory '/root/.aspnet/DataProtection-Keys' that may not be persisted outside of the container. Protected data will be unavailable when container is destroyed.
warn: Microsoft.AspNetCore.DataProtection.KeyManagement.XmlKeyManager[35]
      No XML encryptor configured. Key {70ddc140-9846-4052-b869-8bcc5250d39e} may be persisted to storage in unencrypted form.
info: Microsoft.Hosting.Lifetime[0]
      Now listening on: http://[::]:80
info: Microsoft.Hosting.Lifetime[0]
      Application started. Press Ctrl+C to shut down.
info: Microsoft.Hosting.Lifetime[0]
      Hosting environment: Production
info: Microsoft.Hosting.Lifetime[0]
      Content root path: /app
I should also mention that using kubectl cluster-info dump I get the following line (for the service though, not the pod itself):
time="2021-05-13T10:56:35Z" level=error msg="Port 30080 for service web-external-svc is already opened by another service"

Access kubernetes service externally

I'm following the spring and kubernetes integration tutorial:
https://spring.io/guides/gs/spring-boot-kubernetes/
In my current scenario, I have 1 master and 2 worker servers.
When I deploy the file below using the command kubectl apply -f deployment.yaml, I can make a request from within the master server using kubectl port-forward svc/demo 8080:8080 and curl localhost:8080/actuator/health.
What I want is to make an external request (from a public computer - my computer) to the service that I created (kubernetes_master_ip:8080/actuator), but when I try this, I get "connection refused".
What is missing?
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: demo
  name: demo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: demo
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: demo
    spec:
      containers:
      - image: springguides/demo
        name: demo
        resources: {}
status: {}
---
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: null
  labels:
    app: demo
  name: demo
spec:
  ports:
  - name: 8080-8080
    port: 8080
    protocol: TCP
    targetPort: 8080
  selector:
    app: demo
  type: ClusterIP
status:
  loadBalancer: {}
You need to change the type of the Service to expose the application. There are two ways:
- LoadBalancer type: only works on cloud providers
- NodePort type: can be used on-premise or with minikube
Change your Service YAML to the following:
apiVersion: v1
kind: Service
metadata:
  labels:
    app: demo
  name: demo
spec:
  ports:
  - name: 8080-8080
    port: 8080
    nodePort: 31234
    protocol: TCP
    targetPort: 8080
  selector:
    app: demo
  type: NodePort
Once the service is created, check the IP of the node on which the container is running:
kubectl get pods -o wide
Then try to access the application at:
http://node_ip:31234/actuator
Alternatively, you can change your service type to LoadBalancer, which will expose your service to the external internet via an IP address. Service type LoadBalancer only works with cloud providers.
For more details you can visit: https://kubernetes.io/docs/concepts/services-networking/
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: null
  labels:
    app: demo
  name: demo
spec:
  ports:
  - name: 8080-8080
    port: 8080
    protocol: TCP
    targetPort: 8080
  selector:
    app: demo
  type: LoadBalancer
Save this as YAML and apply it; it will provide the IP address. You can then access the service via that IP:
kubectl get svc

How to expose deployment as a service in 2.1-ee?

I created a service using NodePort, etc., but couldn't access it.
I created a web-service.yaml file with the following content and used kubectl to create the Service:
apiVersion: v1
kind: Service
metadata:
  name: web-service
  labels:
    app: web-service
spec:
  type: NodePort
  ports:
  - port: 80
    protocol: TCP
  selector:
    app: webserver
and a webserver.yaml file with the following Deployment details:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: webserver
spec:
  replicas: 3
  template:
    metadata:
      labels:
        run: webserver
    spec:
      containers:
      - name: webserver
        image: nginx:alpine
        ports:
        - containerPort: 80
In your Deployment the pod label is run=webserver, but in your Service the selector is app=webserver. The Service uses app=webserver as its selector, through which it is supposed to select the three pods that have the label "app" set to "webserver". In this case none of the pods has the label "app", so the Deployment is not successfully exposed as a Service. The label names and values in the Deployment's pod template and the Service's selector must match.
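For instance, keeping the Deployment's run: webserver pod label and pointing the Service selector at it is one of the two equivalent fixes; a sketch of the adjusted Service:
apiVersion: v1
kind: Service
metadata:
  name: web-service
  labels:
    app: web-service
spec:
  type: NodePort
  ports:
  - port: 80
    protocol: TCP
  selector:
    run: webserver         # now matches the label in the Deployment's pod template
The other option is to change the pod template label to app: webserver and leave the Service selector as it is.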
