Kubernetes logs don't print request output when I use a port in the address - go

I've created a cluster with minikube:
minikube start
and applied this YAML manifest:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: gateway-deployment
spec:
  selector:
    matchLabels:
      app: gateway
  replicas: 1
  template:
    metadata:
      labels:
        app: gateway
    spec:
      containers:
      - name: gateway
        image: docker_gateway
        imagePullPolicy: Never
        ports:
        - containerPort: 4001
          protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
  name: gateway
spec:
  selector:
    app: gateway
  ports:
  - protocol: TCP
    port: 4001
And my Go app in the docker_gateway container is just a Gin HTTP server with one route:
package main

import (
    "net/http"

    "github.com/gin-gonic/gin"
)

func main() {
    r := gin.Default()
    r.GET("/hello", func(c *gin.Context) {
        c.JSON(200, gin.H{
            "message": "hello",
        })
    })

    server := &http.Server{
        Addr:    ":4001",
        Handler: r,
    }
    server.ListenAndServe()
}
In Postman I make requests to 192.168.252.130:4001/hello and get responses.
But the Pod's logs in Kubernetes don't print those requests. I expect to see:
[GIN] 2019/10/25 - 14:17:20 | 200 | 1.115µs | 192.168.252.1| GET /hello
But an interesting thing is that when I add an Ingress:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: ingress
spec:
  backend:
    serviceName: gateway
    servicePort: 4001
I am able to make requests to both 192.168.252.130/hello and 192.168.252.130:4001/hello.
Without the port, the Pod's logs print the requests, but with the port they don't:
[GIN] 2019/10/25 - 14:19:13 | 200 | 2.433µs | 192.168.252.1| GET /hello

That's because you cannot access a Kubernetes Service of the ClusterIP type (the default, which your manifest creates) from outside the cluster - in your case, from outside minikube.
You can learn more about Service types in the Kubernetes documentation.
To access your Service from outside, change it to the NodePort type.
Something like:
apiVersion: v1
kind: Service
metadata:
  name: gateway
spec:
  selector:
    app: gateway
  ports:
  - protocol: TCP
    nodePort: 30036
    port: 4001
  type: NodePort
Then you will be able to access it at http://<minikube-ip>:30036/
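Alternatively, minikube can emulate a cloud load balancer: declare the Service as type: LoadBalancer and run minikube tunnel in a separate terminal. A minimal sketch, reusing the same names as above:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: gateway
spec:
  type: LoadBalancer
  selector:
    app: gateway
  ports:
  - protocol: TCP
    port: 4001
    targetPort: 4001
```

With the tunnel running, kubectl get service gateway should show an external IP reachable on port 4001. Either way, the request now actually passes through the Service to the Pod, so the Gin access log should print the request.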

Related

Is it possible to have a single ingress resource for all mulesoft applications in RTF in Self Managed Kubernetes on AWS?

Can we have a single ingress resource for deployment of all mulesoft applications in RTF in Self Managed Kubernetes on AWS?
Ingress template:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: rtf-ingress
  namespace: rtf
  annotations:
    nginx.ingress.kubernetes.io/configuration-snippet: |
      rewrite ^/app-name(/|$)(.*) /$2 break;
    nginx.ingress.kubernetes.io/rewrite-target: /$2
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
    nginx.ingress.kubernetes.io/enable-underscores-in-headers: "true"
    nginx.ingress.kubernetes.io/use-regex: "true"
spec:
  ingressClassName: rtf-nginx
  rules:
  - host: example.com
    http:
      paths:
      - pathType: Prefix
        path: /
        backend:
          service:
            name: temp1-svc
            port:
              number: 80
      - pathType: Prefix
        path: /
        backend:
          service:
            name: temp2-svc
            port:
              number: 80
temp1-svc:
apiVersion: v1
kind: Service
metadata:
  name: temp1-svc
spec:
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP
    name: http
  selector:
    app: temp1-svc
temp2-svc:
apiVersion: v1
kind: Service
metadata:
  name: temp2-svc
spec:
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP
    name: http
  selector:
    app: temp2-svc
I am new to RTF. Are any changes needed in the Ingress resource, or do we need a separate Ingress resource for each application? Any help would be appreciated.
Thanks
Generally, managing a separate Ingress per application is the good option.
You can also use a single Ingress to route and forward traffic across the cluster.
Single Ingress for all services:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: rtf-ingress
  namespace: rtf
  annotations:
    nginx.ingress.kubernetes.io/configuration-snippet: |
      rewrite ^/app-name(/|$)(.*) /$2 break;
    nginx.ingress.kubernetes.io/rewrite-target: /$2
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
    nginx.ingress.kubernetes.io/enable-underscores-in-headers: "true"
    nginx.ingress.kubernetes.io/use-regex: "true"
spec:
  ingressClassName: rtf-nginx
  rules:
  - host: example.com
    http:
      paths:
      - pathType: ImplementationSpecific
        path: /(.*)
        backend:
          service:
            name: service
            port:
              number: 80
      - pathType: ImplementationSpecific
        path: /(.*)
        backend:
          service:
            name: service-2
            port:
              number: 80
The benefit of multiple, separate Ingress resources is that you can keep and configure different annotations on each one:
in one you may want to enable CORS, while in another you want to change the proxy body size, etc. So it's better to manage one Ingress per microservice.
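As a sketch of that per-service annotation idea (the paths and annotation choices below are illustrative; note also that the single-Ingress spec above maps the same path / to two different services, so in practice the rules need distinct paths or hosts):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: temp1-ingress
  namespace: rtf
  annotations:
    # CORS enabled only for this application
    nginx.ingress.kubernetes.io/enable-cors: "true"
spec:
  ingressClassName: rtf-nginx
  rules:
  - host: example.com
    http:
      paths:
      - pathType: Prefix
        path: /app1
        backend:
          service:
            name: temp1-svc
            port:
              number: 80
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: temp2-ingress
  namespace: rtf
  annotations:
    # a larger upload limit only for this application
    nginx.ingress.kubernetes.io/proxy-body-size: "8m"
spec:
  ingressClassName: rtf-nginx
  rules:
  - host: example.com
    http:
      paths:
      - pathType: Prefix
        path: /app2
        backend:
          service:
            name: temp2-svc
            port:
              number: 80
```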

Deploy Rest + gRPC server deploy to k8s with ingress

I have used the sample gRPC HelloWorld application https://github.com/grpc/grpc-go/tree/master/examples/helloworld. This example runs smoothly on my local system.
I want to deploy it to Kubernetes with the use of an Ingress.
Below are my config files.
service.yaml - as NodePort
apiVersion: v1
kind: Service
metadata:
  name: grpc-scratch
  labels:
    run: grpc-scratch
  annotations:
    service.alpha.kubernetes.io/app-protocols: '{"grpc":"HTTP2"}'
spec:
  type: NodePort
  ports:
  - name: grpc
    port: 50051
    protocol: TCP
    targetPort: 50051
  selector:
    run: example-grpc
ingress.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: grpc-ingress
  annotations:
    nginx.org/grpc-services: "grpc"
    kubernetes.io/ingress.class: "nginx"
    kubernetes.io/tls-acme: true
spec:
  tls:
  - hosts:
    - xyz.com
    secretName: grpc-secret
  rules:
  - host: xyz.com
    http:
      paths:
      - path: /grpc
        backend:
          serviceName: grpc
          servicePort: 50051
I am unable to make a gRPC request to the server at the URL xyz.com/grpc. I get the error:
{
  "error": "14 UNAVAILABLE: Name resolution failure"
}
If I make a request to xyz.com the error is:
{
  "error": "14 UNAVAILABLE: Trying to connect an http1.x server"
}
Any help would be appreciated.
A backend in an Ingress object is a combination of a Service name and a port.
In your case you have serviceName: grpc as the backend, while your Service's actual name is name: grpc-scratch:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: grpc-ingress
  annotations:
    nginx.org/grpc-services: "grpc"
    kubernetes.io/ingress.class: "nginx"
    kubernetes.io/tls-acme: "true"
spec:
  tls:
  - hosts:
    - xyz.com
    secretName: grpc-secret
  rules:
  - host: xyz.com
    http:
      paths:
      - path: /grpc
        backend:
          serviceName: grpc-scratch
          servicePort: grpc
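Note that the extensions/v1beta1 Ingress API has since been removed from Kubernetes. On a current cluster the same fix would be expressed with networking.k8s.io/v1; a sketch, assuming the community ingress-nginx controller (which marks gRPC backends with the nginx.ingress.kubernetes.io/backend-protocol annotation, rather than nginx.org/grpc-services):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: grpc-ingress
  annotations:
    nginx.ingress.kubernetes.io/backend-protocol: "GRPC"
spec:
  ingressClassName: nginx
  tls:
  - hosts:
    - xyz.com
    secretName: grpc-secret
  rules:
  - host: xyz.com
    http:
      paths:
      - path: /grpc
        pathType: Prefix
        backend:
          service:
            name: grpc-scratch
            port:
              name: grpc
```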

Unable to access websocket over Kubernetes ingress

I have deployed two services to a Kubernetes cluster on GCP.
One is a Spring Cloud API Gateway implementation:
apiVersion: v1
kind: Service
metadata:
  name: api-gateway
spec:
  ports:
  - name: main
    port: 80
    targetPort: 8080
    protocol: TCP
  selector:
    app: api-gateway
    tier: web
  type: NodePort
The other is a backend chat service implementation which exposes a WebSocket at the /ws/ path:
apiVersion: v1
kind: Service
metadata:
  name: chat-api
spec:
  ports:
  - name: main
    port: 80
    targetPort: 8080
    protocol: TCP
  selector:
    app: chat
    tier: web
  type: NodePort
The API Gateway is exposed to the internet through a Contour ingress controller:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: api-gateway-ingress
  annotations:
    kubernetes.io/tls-acme: "true"
    certmanager.k8s.io/cluster-issuer: "letsencrypt-prod"
    ingress.kubernetes.io/force-ssl-redirect: "true"
spec:
  tls:
  - secretName: api-gateway-tls
    hosts:
    - api.mydomain.com.br
  rules:
  - host: api.mydomain.com.br
    http:
      paths:
      - backend:
          serviceName: api-gateway
          servicePort: 80
The gateway routes incoming calls on the /chat/ path to the chat service on /ws/:
@Bean
public RouteLocator routes(RouteLocatorBuilder builder) {
    return builder.routes()
        .route(r -> r.path("/chat/**")
            .filters(f -> f.rewritePath("/chat/(?<segment>.*)", "/ws/${segment}"))
            .uri("ws://chat-api"))
        .build();
}
When I try to connect to the WebSocket through the gateway I get a 403 error:
error: Unexpected server response: 403
I even tried connecting using http, https, ws and wss, but the error remains.
Does anyone have a clue?
I had the same issue using an Ingress resource with Contour 0.5.0, but I managed to solve it by upgrading Contour to v0.6.0-beta.3 and using an IngressRoute (be aware, though, that it's a beta version).
You can add an IngressRoute resource (a CRD) like this (and remove your previous Ingress resource):
# ingressroute.yaml
apiVersion: contour.heptio.com/v1beta1
kind: IngressRoute
metadata:
  name: api-gateway-ingress
  namespace: default
spec:
  virtualhost:
    fqdn: api.mydomain.com.br
    tls:
      secretName: api-gateway-tls
  routes:
  - match: /
    services:
    - name: api-gateway
      port: 80
  - match: /chat
    enableWebsockets: true # setting this to true enables WebSockets for all paths that match /chat
    services:
    - name: api-gateway
      port: 80
Then apply it.
WebSockets will be authorized only on the /chat path.
See the Contour IngressRoute documentation for more detail.
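For reference, Contour 1.0 replaced IngressRoute with the HTTPProxy CRD; a rough equivalent of the resource above (a sketch, reusing the same names) would be:

```yaml
apiVersion: projectcontour.io/v1
kind: HTTPProxy
metadata:
  name: api-gateway-ingress
  namespace: default
spec:
  virtualhost:
    fqdn: api.mydomain.com.br
    tls:
      secretName: api-gateway-tls
  routes:
  - conditions:
    - prefix: /
    services:
    - name: api-gateway
      port: 80
  - conditions:
    - prefix: /chat
    enableWebsockets: true   # allow WebSocket upgrades on /chat
    services:
    - name: api-gateway
      port: 80
```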

Load balancing K8s Pods with operator-framework

I built a simple operator by tweaking the memcached example. The only major difference is that I need two Docker images in my Pods. I got the deployment running. This is the test.yaml I deploy with kubectl:
apiVersion: "cache.example.com/v1alpha1"
kind: "Memcached"
metadata:
  name: "solar-demo"
spec:
  size: 3
  group: cache.example.com
  names:
    kind: Memcached
    listKind: MemcachedList
    plural: solar-demos
    singular: solar-demo
  scope: Namespaced
  version: v1alpha1
I am still missing one piece though: the load-balancing part. Currently, under Docker, we are using the nginx image as a reverse proxy configured as:
upstream api_microservice {
  server api:3000;
}

upstream solar-svc_microservice {
  server solar-svc:3001;
}

server {
  listen $NGINX_PORT default;

  location /city {
    proxy_pass http://api_microservice;
  }

  location /solar {
    proxy_pass http://solar-svc_microservice;
  }

  root /html;
  location / {
    try_files /$uri /$uri/index.html /$uri.html /index.html=404;
  }
}
I want my cluster to expose port 8080 and forward to ports 3000 and 3001 of the images running inside the Pods.
My deployment:
dep := &appsv1.Deployment{
    TypeMeta: metav1.TypeMeta{
        APIVersion: "apps/v1",
        Kind:       "Deployment",
    },
    ObjectMeta: metav1.ObjectMeta{
        Name:      m.Name,
        Namespace: m.Namespace,
    },
    Spec: appsv1.DeploymentSpec{
        Replicas: &replicas,
        Selector: &metav1.LabelSelector{
            MatchLabels: ls,
        },
        Template: v1.PodTemplateSpec{
            ObjectMeta: metav1.ObjectMeta{
                Labels: ls,
            },
            Spec: v1.PodSpec{
                Containers: []v1.Container{
                    {
                        Image:   "shmukler/docker_solar-svc",
                        Name:    "solar-svc",
                        Command: []string{"npm", "run", "start-solar-svc"},
                        Ports: []v1.ContainerPort{{
                            ContainerPort: 3001,
                            Name:          "solar-svc",
                        }},
                    },
                    {
                        Image:   "shmukler/docker_solar-api",
                        Name:    "api",
                        Command: []string{"npm", "run", "start-api"},
                        Ports: []v1.ContainerPort{{
                            ContainerPort: 3000,
                            Name:          "solar-api",
                        }},
                    },
                },
            },
        },
    },
}
What do I need to add to have an Ingress or something similar running in front of my Pods?
Thank you
What do I need to add have ingress or something running in front of my pods?
Yes, Ingress is designed for exactly that kind of task.
Ingress offers path-based routing, which can express the same configuration as in your Nginx example. Moreover, one of the most popular Ingress implementations uses Nginx as the proxy.
An Ingress is basically a set of rules that allows traffic, which would otherwise be dropped or forwarded elsewhere, to reach the cluster's Services.
Here is an example of an Ingress configuration:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-app
spec:
  rules:
  - host: ''  # an empty value means 'any host'
    http:
      paths:
      - path: /city
        backend:
          serviceName: myapp
          servicePort: 3000
      - path: /solar
        backend:
          serviceName: myapp
          servicePort: 3001
Also, because a Pod is not a static thing, you should create a Service object that will be a static entry point to your application for the Ingress.
Here is an example of the Service:
kind: Service
apiVersion: v1
metadata:
  name: myapp
spec:
  selector:
    app: "NAME_OF_YOUR_DEPLOYMENT"
  ports:
  - name: city
    protocol: TCP
    port: 3000
    targetPort: 3000
  - name: solar
    protocol: TCP
    port: 3001
    targetPort: 3001
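One caveat: the extensions/v1beta1 Ingress API has been removed in current Kubernetes versions, so the same path-based rules would now be written against networking.k8s.io/v1. A sketch reusing the myapp Service above:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app
spec:
  rules:
  - http:   # no host field: the rule applies to any host
      paths:
      - path: /city
        pathType: Prefix
        backend:
          service:
            name: myapp
            port:
              number: 3000
      - path: /solar
        pathType: Prefix
        backend:
          service:
            name: myapp
            port:
              number: 3001
```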

go-micro kubernetes greeter example - unable to reach greeter api service

I'm trying to get this go-micro greeter example working on Kubernetes: https://github.com/micro/examples/tree/master/greeter
I can run this locally in Docker fine. However, when I attempt to access the greeter API service via Kubernetes (http://{{external-ip}}/greeter/say/hello), I get the error: {"id":"go.micro.api","code":500,"detail":"not found","status":"Internal Server Error"}
For the sake of troubleshooting I've simplified the scenario: I simply want to be able to make a call via the micro API to a go-micro API service. Below is my setup:
api.go
package main

import (
    "context"
    "log"

    "github.com/micro/go-micro"
    api "github.com/micro/micro/api/proto"
    k8s "github.com/micro/kubernetes/go/micro"
)

type Say struct{}

// I just want to access this via the micro api on k8s via the service's external IP
func (s *Say) Hello(ctx context.Context, req *api.Request, rsp *api.Response) error {
    rsp.StatusCode = 200
    rsp.Body = "Hello"
    return nil
}

func main() {
    service := k8s.NewService(
        micro.Name("default.greeter-api"),
    )
    service.Init()

    service.Server().Handle(
        service.Server().NewHandler(
            &Say{},
        ),
    )

    if err := service.Run(); err != nil {
        log.Fatal(err)
    }
}
micro-api-deployment.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: micro
spec:
  replicas: 1
  selector:
    matchLabels:
      app: micro
  template:
    metadata:
      labels:
        app: micro
    spec:
      containers:
      - name: micro
        image: microhq/micro:kubernetes
        args:
        - "api"
        - "--handler=rpc"
        - "--namespace=default"
        env:
        - name: MICRO_API_ADDRESS
          value: ":80"
        ports:
        - containerPort: 80
          name: api-port
micro-api-svc.yml
apiVersion: v1
kind: Service
metadata:
  name: micro
spec:
  type: LoadBalancer
  ports:
  - name: api-http
    port: 80
    targetPort: "api-port"
    protocol: TCP
  selector:
    app: micro
greet-deployment.yml
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: greeter-api
spec:
  replicas: 1
  selector:
    matchLabels:
      app: greeter-api
  template:
    metadata:
      labels:
        app: greeter-api
    spec:
      containers:
      - name: greeter-api-service
        image: greeter-api:latest
        imagePullPolicy: Always
        command: [
          "./greet",
          "--selector=static",
          "--server_address=:8080",
        ]
        ports:
        - containerPort: 8080
          name: greet-port
greet-svc.yml
apiVersion: v1
kind: Service
metadata:
  name: greet
  labels:
    app: greet
spec:
  ports:
  - port: 8080
    protocol: TCP
  selector:
    app: greet
Everything is fine with your configs.

  http://{{external-ip}}/greeter/say/hello), I get the error:
  {"id":"go.micro.api","code":500,"detail":"not found","status":"Internal Server Error"}

You just missed the port number 8080 in your request, so you tried to call the rpc service instead of greeter-api.