go-micro kubernetes greeter example - unable to reach greeter api service - go

I'm trying to get this go-micro greeter example working on Kubernetes https://github.com/micro/examples/tree/master/greeter
I can run this locally in Docker fine. However, when I attempt to access the greeter api service via Kubernetes (http://{{external-ip}}/greeter/say/hello), I get the error: {"id":"go.micro.api","code":500,"detail":"not found","status":"Internal Server Error"}
For the sake of troubleshooting I've simplified the scenario: I simply want to be able to make a call via the micro api to a go-micro api service. Below is my setup:
api.go
package main

import (
    "context"
    "log"

    "github.com/micro/go-micro"
    k8s "github.com/micro/kubernetes/go/micro"
    api "github.com/micro/micro/api/proto"
)

type Say struct{}

// I just want to access this via the micro api on k8s via the service's external IP.
func (s *Say) Hello(ctx context.Context, req *api.Request, rsp *api.Response) error {
    rsp.StatusCode = 200
    rsp.Body = "Hello"
    return nil
}

func main() {
    service := k8s.NewService(
        micro.Name("default.greeter-api"),
    )
    service.Init()

    service.Server().Handle(
        service.Server().NewHandler(
            &Say{},
        ),
    )

    if err := service.Run(); err != nil {
        log.Fatal(err)
    }
}
micro-api-deployment.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: micro
spec:
  replicas: 1
  selector:
    matchLabels:
      app: micro
  template:
    metadata:
      labels:
        app: micro
    spec:
      containers:
      - name: micro
        image: microhq/micro:kubernetes
        args:
        - "api"
        - "--handler=rpc"
        - "--namespace=default"
        env:
        - name: MICRO_API_ADDRESS
          value: ":80"
        ports:
        - containerPort: 80
          name: api-port
micro-api-svc.yml
apiVersion: v1
kind: Service
metadata:
  name: micro
spec:
  type: LoadBalancer
  ports:
  - name: api-http
    port: 80
    targetPort: "api-port"
    protocol: TCP
  selector:
    app: micro
greet-deployment.yml
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: greeter-api
spec:
  replicas: 1
  selector:
    matchLabels:
      app: greeter-api
  template:
    metadata:
      labels:
        app: greeter-api
    spec:
      containers:
      - name: greeter-api-service
        image: greeter-api:latest
        imagePullPolicy: Always
        command: [
          "./greet",
          "--selector=static",
          "--server_address=:8080",
        ]
        ports:
        - containerPort: 8080
          name: greet-port
greet-svc.yml
apiVersion: v1
kind: Service
metadata:
  name: greet
  labels:
    app: greet
spec:
  ports:
  - port: 8080
    protocol: TCP
  selector:
    app: greet

Everything is fine with your configs.

    http://{{external-ip}}/greeter/say/hello, I get the error:
    {"id":"go.micro.api","code":500,"detail":"not found","status":"Internal Server Error"}

You just missed the port number 8080 in your request, and so you called the rpc service instead of greeter-api.
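In other words (a sketch, assuming port 8080 is reachable at that external IP):

curl http://{{external-ip}}:8080/greeter/say/hello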

Related

Istio virtual service subset not able to send request to specific pods

I have the following scenario:
FastAPI (API Gateway)
Users (gRPC service)
Below is the deployment YAML:
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: users
  labels:
    app: users
    version: v1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: users
      version: v1
  template:
    metadata:
      labels:
        app: users
        version: v1
    spec:
      containers:
      - image: users:v0.0.1
        imagePullPolicy: Always
        name: svc
        ports:
        - containerPort: 9090
---
kind: Service
apiVersion: v1
metadata:
  name: users
  labels:
    app: users
spec:
  selector:
    app: users
  ports:
  - name: grpc
    protocol: TCP
    port: 9090
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: fastapi
  labels:
    app: fastapi
    version: v1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: fastapi
      version: v1
  template:
    metadata:
      labels:
        app: fastapi
        version: v1
    spec:
      containers:
      - image: fastapi:latest
        imagePullPolicy: Always
        name: web
        ports:
        - containerPort: 8080
        env:
        - name: USERS_SVC
          value: 'users:9090'
---
kind: Service
apiVersion: v1
metadata:
  name: fastapi
  labels:
    app: fastapi
spec:
  selector:
    app: fastapi
  ports:
  - port: 8080
    name: http
After this, I tried to test a VirtualService that routes to the users service (version: v2) when an HTTP header is passed. Below are the VirtualService and DestinationRule definitions:
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: users-service-destination-rule
spec:
  host: users
  subsets:
  - name: v1
    labels:
      version: v1
  - name: v2
    labels:
      version: v2
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: users-virtual-service
spec:
  hosts:
  - users
  http:
  - match:
    - headers:
        x-user-testing:
          exact: tester
    route:
    - destination:
        host: users
        subset: v2
  - route:
    - destination:
        host: users
        subset: v1
Below is the deployment for the users service (version v2):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: users-v2
  labels:
    app: users
    version: v2
spec:
  replicas: 1
  selector:
    matchLabels:
      app: users
      version: v2
  template:
    metadata:
      labels:
        app: users
        version: v2
    spec:
      containers:
      - image: users:v0.0.1
        imagePullPolicy: Always
        name: svc
        ports:
        - containerPort: 9090
When I pass the header value, the request always goes to version v1.
curl -H "x-user-testing: tester" localhost/users
Can anyone help me, please?
Thanks in advance.
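One thing worth checking (my assumption, not something stated in the question): Istio matches headers on the request that actually reaches the users service, so the FastAPI gateway has to forward x-user-testing on its outbound gRPC call, or the sidecar never sees it. A minimal sketch of that forwarding, assuming a Python gateway; the method path /users.Users/GetUsers and the byte-passthrough (de)serializers are placeholders, not taken from the question:

import grpc
from fastapi import FastAPI, Request

app = FastAPI()

@app.get("/users")
async def get_users(request: Request):
    # Copy the routing header onto the outbound gRPC metadata so the
    # Istio sidecar can match it against the VirtualService rule.
    metadata = []
    if "x-user-testing" in request.headers:
        metadata.append(("x-user-testing", request.headers["x-user-testing"]))
    async with grpc.aio.insecure_channel("users:9090") as channel:
        # A real client would use the generated stub; this generic call
        # only illustrates where the metadata goes.
        call = channel.unary_unary(
            "/users.Users/GetUsers",  # hypothetical method path
            request_serializer=lambda m: m,
            response_deserializer=lambda m: m,
        )
        reply = await call(b"", metadata=metadata)
    return {"reply_bytes": len(reply)}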

Springboot service deployed in AKS not working with ingress

I have a simple Springboot service, which works well when I configure a Service of type: LoadBalancer. But when I use a Service of type: ClusterIP and introduce an Ingress, it does not work.
For that matter, I am unable to get Ingress working for any of my deployments in Azure/AKS.
Please suggest what I am missing.
Spring code
@RestController
@RequestMapping("/demo")
public class MyController {

    @GetMapping("/welcome")
    public String welcome() {
        return "Hello Welcome";
    }
}
LoadBalancer - working code
spring-microservice-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: spring-microservice
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: spring-microservice
  template:
    metadata:
      labels:
        app: spring-microservice
    spec:
      containers:
      - name: spring-microservice
        image: babaacr.azurecr.io/welcome-service:1.0
        resources:
          requests:
            memory: '256Mi'
            cpu: '500m'
          limits:
            memory: '512Mi'
            cpu: '1'
        ports:
        - name: http
          containerPort: 8080
spring-microservice-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: spring-microservice
  namespace: default
  labels:
    app: spring-microservice
spec:
  selector:
    app: spring-microservice
  type: LoadBalancer
  ports:
  - name: http
    port: 8080
    targetPort: 8080
    protocol: TCP
When I access the URL http://20.XXX.XX.XX:8080/demo/welcome, the message is printed.
Ingress-based setup, which is not working:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: spring-microservice-x
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: spring-microservice-x
  template:
    metadata:
      labels:
        app: spring-microservice-x
    spec:
      containers:
      - name: spring-microservice-x
        image: babaacr.azurecr.io/welcome-service:1.0
        resources:
          requests:
            memory: '256Mi'
            cpu: '500m'
          limits:
            memory: '512Mi'
            cpu: '1'
        ports:
        - name: http
          containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: spring-microservice-x
  namespace: default
  labels:
    app: spring-microservice-x
spec:
  selector:
    app: spring-microservice-x
  type: ClusterIP
  ports:
  - name: http
    port: 8080
    targetPort: 8080
    protocol: TCP
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: spring-microservice-ingress
spec:
  defaultBackend:
    service:
      name: spring-microservice-x
      port:
        number: 8080
When I access the URL http://20.YYY.YY.YY:8080/demo/welcome, the page times out.
The ingress controller was configured using the command below:
helm install ingress-nginx ingress-nginx/ingress-nginx --set controller.service.annotations."service\.beta\.kubernetes\.io/azure-load-balancer-health-probe-request-path"=/healthz
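One thing I'd check here (my assumption, not stated in the question): the ingress-nginx controller's LoadBalancer normally listens on ports 80/443, not 8080, and the Ingress above has no ingressClassName, so the controller may not pick it up. Something like the following should show where the Ingress is actually reachable:

kubectl get svc ingress-nginx-controller   # note the EXTERNAL-IP and ports
curl http://<EXTERNAL-IP>/demo/welcome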

Kubernetes logs don't print requests output when I use a port in an address

I've created a cluster with minikube
minikube start
Then I applied this YAML manifest:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: gateway-deployment
spec:
  selector:
    matchLabels:
      app: gateway
  replicas: 1
  template:
    metadata:
      labels:
        app: gateway
    spec:
      containers:
      - name: gateway
        image: docker_gateway
        imagePullPolicy: Never
        ports:
        - containerPort: 4001
          protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
  name: gateway
spec:
  selector:
    app: gateway
  ports:
  - protocol: TCP
    port: 4001
And my Go app in the docker_gateway container is just a gin HTTP server with one route:
package main

import (
    "net/http"

    "github.com/gin-gonic/gin"
)

func main() {
    r := gin.Default()
    r.GET("/hello", func(c *gin.Context) {
        c.JSON(200, gin.H{
            "message": "hello",
        })
    })

    server := &http.Server{
        Addr:    ":4001",
        Handler: r,
    }
    server.ListenAndServe()
}
In Postman I make requests to 192.168.252.130:4001/hello and get responses.
But the Pod's logs in Kubernetes don't print those requests. I expect to get this:
[GIN] 2019/10/25 - 14:17:20 | 200 | 1.115µs | 192.168.252.1| GET /hello
But an interesting thing happens when I add an Ingress:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: ingress
spec:
  backend:
    serviceName: gateway
    servicePort: 4001
I am able to make requests to both 192.168.252.130/hello and 192.168.252.130:4001/hello.
Without the port, the Pod's logs print the requests, but with the port they don't.
[GIN] 2019/10/25 - 14:19:13 | 200 | 2.433µs | 192.168.252.1| GET /hello
That's because you cannot access a Kubernetes Service of type ClusterIP from outside the cluster (in your case, from outside minikube).
Learn more about service types here.
To access your service from outside, change your service to the NodePort type.
Something like:
apiVersion: v1
kind: Service
metadata:
  name: gateway
spec:
  selector:
    app: gateway
  ports:
  - protocol: TCP
    nodePort: 30036
    port: 4001
  type: NodePort
Then you will be able to access it at http://<minikube-ip>:30036/
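For example (assuming the minikube node is reachable from your machine):

curl http://$(minikube ip):30036/hello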

Istio - GKE - gRPC config stream closed; upstream connect error or disconnect/reset before headers. reset reason: connection failure

I am trying to run my Spring Boot microservice in a GKE cluster with Istio 1.1.5, the latest version as of now. It throws an error and the pod never spins up. If I run it as a separate service in Kubernetes Engine it works perfectly, but with Istio it does not. The purpose of using Istio is to host multiple microservices and to use the features Istio provides. Here is my YAML file:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: revenue
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: revenue-serv
        tier: backend
        track: stable
    spec:
      containers:
      - name: backend
        image: "gcr.io/finacials/revenue-serv:latest"
        imagePullPolicy: Always
        ports:
        - containerPort: 8081
        livenessProbe:
          httpGet:
            path: /
            port: 8081
          initialDelaySeconds: 15
          timeoutSeconds: 30
        readinessProbe:
          httpGet:
            path: /
            port: 8081
          initialDelaySeconds: 15
          timeoutSeconds: 30
---
apiVersion: v1
kind: Service
metadata:
  name: revenue-serv
spec:
  ports:
  - port: 8081
    #targetPort: 8081
    #protocol: TCP
    name: http
  selector:
    app: revenue-serv
    tier: backend
  type: LoadBalancer
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: gateway
  annotations:
    kubernetes.io/ingress.class: "istio"
spec:
  rules:
  - http:
      paths:
      - path: /revenue/.*
        backend:
          serviceName: revenue-serv
          servicePort: 8081
Thanks for your valuable feedback.
I have found the issue. I removed the readinessProbe and livenessProbe and created an ingress gateway and a virtual service. It worked.
Deployment & service:
#########################################################################################
# This is for deployment - Service & Deployment in Kubernetes            ################
# Author: Arindam Banerjee                                               ################
#########################################################################################
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: revenue-serv
  namespace: dev
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: revenue-serv
        version: v1
    spec:
      containers:
      - name: revenue-serv
        image: "eu.gcr.io/rcup-mza-dev/revenue-serv:latest"
        imagePullPolicy: Always
        ports:
        - containerPort: 8081
---
apiVersion: v1
kind: Service
metadata:
  name: revenue-serv
  namespace: dev
spec:
  ports:
  - port: 8081
    name: http
  selector:
    app: revenue-serv
gateway.yaml
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: revenue-serv-gateway
  namespace: dev
spec:
  selector:
    istio: ingressgateway # use istio default controller
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"
virtual-service.yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: revenue-serv-virtualservice
  namespace: dev
spec:
  hosts:
  - "*"
  gateways:
  - revenue-serv-gateway
  http:
  - route:
    - destination:
        host: revenue-serv
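Once these are applied, the service should be reachable through the Istio ingress gateway's external IP (a sketch; the IP depends on your cluster):

kubectl get svc istio-ingressgateway -n istio-system   # note the EXTERNAL-IP
curl http://<EXTERNAL-IP>/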

Load balancing K8s Pods with operator-framework

I built a simple operator by tweaking the memcached example. The only major difference is that I need two Docker images in my pods. I got the deployment running. Here is my test.yaml, used to deploy with kubectl:
apiVersion: "cache.example.com/v1alpha1"
kind: "Memcached"
metadata:
name: "solar-demo"
spec:
size: 3
group: cache.example.com
names:
kind: Memcached
listKind: MemcachedList
plural: solar-demos
singular: solar-demo
scope: Namespaced
version: v1alpha1
I am still missing one piece though: the load-balancing part. Currently, under Docker, we are using the nginx image as a reverse proxy, configured as:
upstream api_microservice {
    server api:3000;
}

upstream solar-svc_microservice {
    server solar-svc:3001;
}

server {
    listen $NGINX_PORT default;

    location /city {
        proxy_pass http://api_microservice;
    }

    location /solar {
        proxy_pass http://solar-svc_microservice;
    }

    root /html;
    location / {
        try_files /$uri /$uri/index.html /$uri.html /index.html=404;
    }
}
I want my cluster to expose port 8080 and forward it to ports 3000 and 3001 of the images running inside my Pods.
My deployment:
dep := &appsv1.Deployment{
TypeMeta: metav1.TypeMeta{
APIVersion: "apps/v1",
Kind: "Deployment",
},
ObjectMeta: metav1.ObjectMeta{
Name: m.Name,
Namespace: m.Namespace,
},
Spec: appsv1.DeploymentSpec{
Replicas: &replicas,
Selector: &metav1.LabelSelector{
MatchLabels: ls,
},
Template: v1.PodTemplateSpec{
ObjectMeta: metav1.ObjectMeta{
Labels: ls,
},
Spec: v1.PodSpec{
Containers: []v1.Container{
{
Image: "shmukler/docker_solar-svc",
Name: "solar-svc",
Command: []string{"npm", "run", "start-solar-svc"},
Ports: []v1.ContainerPort{{
ContainerPort: 3001,
Name: "solar-svc",
}},
},
{
Image: "shmukler/docker_solar-api",
Name: "api",
Command: []string{"npm", "run", "start-api"},
Ports: []v1.ContainerPort{{
ContainerPort: 3000,
Name: "solar-api",
}},
},
},
},
},
}
What do I need to add to have an ingress or something similar running in front of my Pods?
Thank you
What do I need to add to have an ingress or something similar running in front of my Pods?

Yes, Ingress is designed for exactly that kind of task.
Ingress provides path-based routing, which can express the same configuration as your Nginx example. Moreover, one of the most popular Ingress implementations is Nginx acting as the proxy.
Ingress is basically a set of rules that allows traffic, which would otherwise be dropped or forwarded elsewhere, to reach the cluster's services.
Here is an example of an Ingress configuration:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-app
spec:
  rules:
  - host: '' # An empty value means 'any host'
    http:
      paths:
      - path: /city
        backend:
          serviceName: myapp
          servicePort: 3000
      - path: /solar
        backend:
          serviceName: myapp
          servicePort: 3001
Also, because a Pod is not a static thing, you should create a Service object that will serve as a static entry point to your application for the Ingress.
Here is an example of the Service:
kind: Service
apiVersion: v1
metadata:
  name: myapp
spec:
  selector:
    app: "NAME_OF_YOUR_DEPLOYMENT"
  ports:
  - name: city
    protocol: TCP
    port: 3000
    targetPort: 3000
  - name: solar
    protocol: TCP
    port: 3001
    targetPort: 3001
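One caveat worth adding (an assumption about the cluster, not part of the original answer): Ingress rules only take effect if an Ingress controller is actually running in the cluster. For example, ingress-nginx can be installed with:

helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm install ingress-nginx ingress-nginx/ingress-nginx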
