How to deploy a Eureka naming server on AKS? - yaml

I want to deploy my web service to Azure Kubernetes Service. I currently have 3 microservices in one node: 1 API gateway and 2 backend microservices. Error messages: none. Because I do not use routes in the API gateway, I cannot control my backend microservices via the API gateway. I have now created a Eureka naming server and want to deploy it in the same node, so that my microservices can communicate with each other.
This is the YAML file that I used for the naming server:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: discoveryservice-front
spec:
  replicas: 1
  selector:
    matchLabels:
      app: discoveryservice-front
  template:
    metadata:
      labels:
        app: discoveryservice-front
    spec:
      nodeSelector:
        "beta.kubernetes.io/os": linux
      containers:
      - name: discoveryservice-front
        image: registry.azurecr.io/discoveryservice:16
        resources:
          requests:
            cpu: 100m
            memory: 128Mi
          limits:
            cpu: 250m
            memory: 512Mi
        ports:
        - containerPort: 8762
          name: discovery
---
apiVersion: v1
kind: Service
metadata:
  name: discoveryservice-front
spec:
  ports:
  - port: 8762
  selector:
    app: discoveryservice-front
---
First of all, I don't get an external IP address and I don't know why. Can someone tell me how to get an external IP for my naming server?
This is my YAML file for the rest of the microservices:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: apigateway-front
spec:
  replicas: 1
  selector:
    matchLabels:
      app: apigateway-front
  template:
    metadata:
      labels:
        app: apigateway-front
    spec:
      nodeSelector:
        "beta.kubernetes.io/os": linux
      containers:
      - name: apigateway-front
        image: registry.azurecr.io/apigateway:11
        resources:
          requests:
            cpu: 100m
            memory: 128Mi
          limits:
            cpu: 250m
            memory: 512Mi
        ports:
        - containerPort: 8800
          name: apigateway
---
apiVersion: v1
kind: Service
metadata:
  name: apigateway-front
spec:
  type: LoadBalancer
  ports:
  - port: 8800
  selector:
    app: apigateway-front
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: contacts-back
spec:
  replicas: 1
  selector:
    matchLabels:
      app: contacts-back
  template:
    metadata:
      labels:
        app: contacts-back
    spec:
      nodeSelector:
        "beta.kubernetes.io/os": linux
      containers:
      - name: contacts-back
        image: registry.azurecr.io/contacts:12
        resources:
          requests:
            cpu: 100m
            memory: 128Mi
          limits:
            cpu: 250m
            memory: 512Mi
        ports:
        - containerPort: 8100
          name: contacts-back
---
apiVersion: v1
kind: Service
metadata:
  name: contacts-back
spec:
  ports:
  - port: 8100
  selector:
    app: contacts-back
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: templates-back
spec:
  replicas: 1
  selector:
    matchLabels:
      app: templates-back
  template:
    metadata:
      labels:
        app: templates-back
    spec:
      nodeSelector:
        "beta.kubernetes.io/os": linux
      containers:
      - name: templates-back
        image: registry.azurecr.io/templates:13
        resources:
          requests:
            cpu: 100m
            memory: 128Mi
          limits:
            cpu: 250m
            memory: 512Mi
        ports:
        - containerPort: 8200
          name: templates-back
---
apiVersion: v1
kind: Service
metadata:
  name: templates-back
spec:
  ports:
  - port: 8200
  selector:
    app: templates-back
Secondly, can someone tell me how I register my microservices with the naming server?
If I start my microservices without AKS and without Docker, then the naming server works.

Since you are using Kubernetes, the suggested approach would be to use Kubernetes service discovery instead of Eureka service discovery.
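As a rough illustration (the environment variable names here are hypothetical; the Service names and ports come from the manifests in the question), the gateway can reach the backends directly through their Kubernetes Service DNS names rather than looking them up in Eureka:
# Hypothetical env entries for the apigateway-front container:
# with Kubernetes service discovery, each backend is reachable
# by its Service name and port inside the cluster.
env:
- name: CONTACTS_SERVICE_URL
  value: "http://contacts-back:8100"
- name: TEMPLATES_SERVICE_URL
  value: "http://templates-back:8200"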
For the API gateway, you can use a Kubernetes-native API gateway such as Ambassador or Kong, both of which integrate very well with Kubernetes.
To answer the above questions:
An external IP is only assigned to Services of type LoadBalancer (I am assuming that by external IP you mean an IP address that can be used from outside the cluster).
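For example, the discoveryservice-front Service gets an external IP once it is given type LoadBalancer, just like the apigateway-front Service (a minimal sketch based on the Service definition from the question):
apiVersion: v1
kind: Service
metadata:
  name: discoveryservice-front
spec:
  type: LoadBalancer   # without this, the Service defaults to ClusterIP and gets no external IP
  ports:
  - port: 8762
  selector:
    app: discoveryservice-front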
To register with Eureka service discovery, I guess you will need to make minor changes in the code (to inform Eureka once the application is up, so that it registers its instance).
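As a rough sketch (assuming the microservices are Spring Boot applications using the Spring Cloud Netflix Eureka client, which the question does not state), each service's application.yml could point at the naming server through its in-cluster Service name:
# Hypothetical application.yml fragment for a backend microservice;
# the properties are the standard Spring Cloud Netflix Eureka client settings.
eureka:
  client:
    serviceUrl:
      defaultZone: http://discoveryservice-front:8762/eureka/
  instance:
    preferIpAddress: true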
If there is any doubt about the above answers, please comment and I will try to explain in more depth.

Related

failed to load plugin class in elasticsearch.xpack.security when upgrading to 7.9.0

I am using it in a Kubernetes cluster as part of a Magento app. I need to upgrade my Elasticsearch version to 7.9.0. My current version is 7.8.0 and it is working:
apiVersion: elasticsearch.k8s.elastic.co/v1beta1
kind: Elasticsearch
metadata:
  name: elasticsearch
  namespace: default
spec:
  version: 7.8.0
  nodeSets:
  - name: elasticsearch
    count: 1
    config:
      node.master: true
      node.data: true
      node.ingest: true
      node.store.allow_mmap: false
      xpack.security.authc:
        anonymous:
          username: anonymous
          roles: superuser
          authz_exception: false
    podTemplate:
      spec:
        containers:
        - name: elasticsearch
          env:
          - name: ES_JAVA_OPTS
            value: -Xms512m -Xmx512m
          resources:
            requests:
              memory: 1Gi
              cpu: 0.5
            limits:
              memory: 1Gi
              cpu: 1
  http:
    tls:
      selfSignedCertificate:
        disabled: true
When I change to 7.9.0, it crashes with the error:
failed to load plugin class [org.elasticsearch.xpack.security.Security
Any idea how to fix?
It was solved by using v1 instead of v1beta1:
apiVersion: elasticsearch.k8s.elastic.co/v1

Exposing Kibana behind GCE ingress (UNHEALTHY state)

I'm trying to expose Kibana behind a GCE ingress, but the ingress reports the Kibana service as UNHEALTHY even though it is healthy and ready. Note that the healthcheck created by the ingress still uses the default values: HTTP on the root path / and a port such as 32021.
Changing the healthcheck in the GCP console to HTTPS on /login and port 5601 doesn't change anything, and the service is still reported as unhealthy. The healthcheck port is also overwritten back to its original value, which is strange.
I'm using ECK 1.3.1 and my configs are below. Am I missing anything? Thank you in advance.
apiVersion: elasticsearch.k8s.elastic.co/v1beta1
kind: Elasticsearch
metadata:
  name: d3m0
spec:
  version: 7.10.1
  nodeSets:
  - name: default
    count: 1
    config:
      node.store.allow_mmap: false
---
apiVersion: kibana.k8s.elastic.co/v1beta1
kind: Kibana
metadata:
  name: d3m0
spec:
  version: 7.10.1
  count: 1
  elasticsearchRef:
    name: d3m0
  podTemplate:
    metadata:
      labels:
        kibana: node
    spec:
      containers:
      - name: kibana
        resources:
          limits:
            memory: 1Gi
            cpu: 1
        readinessProbe:
          httpGet:
            scheme: HTTPS
            path: "/login"
            port: 5601
  http:
    service:
      spec:
        type: NodePort
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: kibana-ingress
spec:
  backend:
    serviceName: d3m0-kb-http
    servicePort: 5601
When using ECK, all the security features are enabled on Elasticsearch and Kibana, which means that their services do not accept the HTTP traffic used by the default GCP load balancer healthcheck. You must add the required annotations to the services and override the healthcheck paths as in the code below. Please find more details here.
apiVersion: kibana.k8s.elastic.co/v1
kind: Kibana
metadata:
  name: d3m0
spec:
  version: 7.10.1
  count: 1
  elasticsearchRef:
    name: d3m0
  http:
    service:
      metadata:
        labels:
          app: kibana
        annotations:
          # Enable TLS between GCLB and the application
          cloud.google.com/app-protocols: '{"https":"HTTPS"}'
          service.alpha.kubernetes.io/app-protocols: '{"https":"HTTPS"}'
          # Uncomment the following line to enable container-native load balancing.
          cloud.google.com/neg: '{"ingress": true}'
  podTemplate:
    metadata:
      labels:
        name: kibana-fleet
    spec:
      containers:
      - name: kibana
        resources:
          limits:
            memory: 1Gi
            cpu: 1
        readinessProbe:
          # Override the readiness probe as GCLB reuses it for its own healthchecks
          httpGet:
            scheme: HTTPS
            path: "/login"
            port: 5601

Kubernetes: Can't Increase Req/Sec with More Nodes

I created a simple Node.js Express application; it just listens on a single GET endpoint and returns a "hello world" string.
Then I deployed that application on a Kubernetes cluster with 1 node with 4 vCPU and 8 GB RAM, and routed all traffic to the app with the NGINX ingress controller.
After completing the deployment, I performed a simple HTTP load test. The result was 70 - 100 req/sec. I tried increasing the replica count to 6, but the results were still the same. I also tried specifying resource limits, but nothing changed.
Lastly, I added 2 more nodes to the pool, each with 4 vCPU and 8 GB RAM.
After that I performed the load test again, but the results were still the same. With a total of 3 nodes, 12 vCPU and 24 GB RAM, it can barely handle 80 req/sec.
Results from the load test:
12 threads and 400 connections
Thread Stats   Avg       Stdev     Max      +/- Stdev
  Latency      423.68ms  335.38ms  1.97s    84.88%
  Req/Sec      76.58     34.13     212.00   72.37%
4457 requests in 5.08s, 1.14MB read
I am probably doing something wrong, but I couldn't figure out what.
This is my deployment and service YAML file.
apiVersion: v1
kind: Service
metadata:
  name: app-3
spec:
  type: ClusterIP
  ports:
  - port: 80
    targetPort: 8080
  selector:
    app: app-3
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-3
spec:
  replicas: 6
  selector:
    matchLabels:
      app: app-3
  template:
    metadata:
      labels:
        app: app-3
    spec:
      containers:
      - name: app-3-name
        image: s1nc4p/node:v22
        ports:
        - containerPort: 8080
        resources:
          requests:
            memory: "1024Mi"
            cpu: "1000m"
          limits:
            memory: "1024Mi"
            cpu: "1000m"
This is the ingress service YAML file.
kind: Service
apiVersion: v1
metadata:
  name: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  externalTrafficPolicy: Local
  type: LoadBalancer
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
  ports:
  - name: http
    port: 80
    protocol: TCP
And this is the ingress file.
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: hello-kubernetes-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/rewrite-target: /$1
spec:
  rules:
  - host: kkk.storadewebservices.com
    http:
      paths:
      - path: '/'
        backend:
          serviceName: app-3
          servicePort: 80

Impossible to connect to Elasticsearch in Kubernetes (bare metal)

I've set up Elasticsearch + Kibana + Metricbeat in a local cluster, but Metricbeat can't connect to Elasticsearch:
ERROR pipeline/output.go:100 Failed to connect to backoff(elasticsearch(http://elasticsearch:9200)): Get http://elasticsearch:9200: lookup elasticsearch on 10.96.0.10:53: no such host
2019-10-15T14:14:32.553Z INFO pipeline/output.go:93 Attempting to reconnect to backoff(elasticsearch(http://elasticsearch:9200)) with 10 reconnect attempt(s)
2019-10-15T14:14:32.553Z INFO [publisher] pipeline/retry.go:189 retryer: send unwait-signal to consumer
2019-10-15T14:14:32.553Z INFO [publisher] pipeline/retry.go:191 done
2019-10-15T14:14:32.553Z INFO [publisher] pipeline/retry.go:166 retryer: send wait signal to consumer
2019-10-15T14:14:32.553Z INFO [publisher] pipeline/retry.go:168 done
2019-10-15T14:14:32.592Z WARN transport/tcp.go:53 DNS lookup failure "elasticsearch": lookup elasticsearch on 10.96.0.10:53: no such host
In my cluster I use MetalLB and an ingress. I've set up ingress rules, but it didn't help.
I've also noticed that the ELK stack and Metricbeat use different namespaces in the docs. I've tried making the namespaces the same everywhere, but that was unsuccessful.
My YAMLs are attached below. I didn't attach the files for Elasticsearch/Kibana and Metricbeat because they have a lot of lines; I only included references to them:
elastic/kibana -
https://download.elastic.co/downloads/eck/1.0.0-beta1/all-in-one.yaml
metricbeat - https://raw.githubusercontent.com/elastic/beats/7.4/deploy/kubernetes/metricbeat-kubernetes.yaml
Does anybody know why this happens?
**elastic config** -
apiVersion: elasticsearch.k8s.elastic.co/v1beta1
kind: Elasticsearch
metadata:
  name: quickstart
spec:
  version: 7.4.0
  nodeSets:
  - name: default
    count: 1
    config:
      node.master: true
      node.data: true
      node.ingest: true
      node.store.allow_mmap: false
    volumeClaimTemplates:
    - metadata:
        name: elasticsearch-data # note: elasticsearch-data must be the name of the Elasticsearch volume
      spec:
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: 20Gi
        storageClassName: standard
  http:
    service:
      spec:
        type: LoadBalancer
**kibana config** -
apiVersion: kibana.k8s.elastic.co/v1beta1
kind: Kibana
metadata:
  name: quickstart
spec:
  version: 7.4.0
  count: 1
  elasticsearchRef:
    name: quickstart
  http:
    service:
      spec:
        type: LoadBalancer
    tls:
      selfSignedCertificate:
        disabled: true
**ingress rules** -
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress
  annotations:
spec:
  rules:
  - http:
      paths:
      - path: /
        backend:
          serviceName: undemo-service
          servicePort: 80
      - path: /
        backend:
          serviceName: quickstart-kb-http
          servicePort: 80
      - path: /
        backend:
          serviceName: quickstart-es-http
          servicePort: 80
Just be aware that Filebeat, Metricbeat, etc. run under the kube-system namespace.
If you run Elasticsearch in the default namespace, you should use elasticsearch.default as the host so that your service resolves properly.
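As a rough sketch of that change (assuming the Metricbeat DaemonSet from the linked metricbeat-kubernetes.yaml manifest, which reads the Elasticsearch host and port from environment variables), the host can be namespace-qualified like this:
# Hypothetical excerpt from the Metricbeat DaemonSet container spec;
# the variable names follow the linked metricbeat-kubernetes.yaml manifest.
env:
- name: ELASTICSEARCH_HOST
  value: elasticsearch.default   # <service-name>.<namespace>, as suggested above
- name: ELASTICSEARCH_PORT
  value: "9200"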

How to restore a persistent disk in a Kubernetes StatefulSet?

I have a gcePersistentDisk with an elasticsearch index. When I delete and recreate the statefulset, I want to reuse this disk so the same index is available on the new statefulset.
I've managed to rebind the disk, my-es-disk, to a new statefulset, but when I connect to the container and check for the index, there are no indices.
I've reviewed the Google and Kubernetes docs, and the YAML seems correct, but I still cannot retain the index. I am sure it's something simple! Thanks for your help!
My workflow:
1 - Create storage class
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: ssd
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-ssd
  zone: us-central1-a
reclaimPolicy: Retain
allowVolumeExpansion: true
2 - Create PersistentVolume and PersistentVolumeClaim
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-es
spec:
  storageClassName: ssd
  capacity:
    storage: 3G
  accessModes:
  - ReadWriteOnce
  gcePersistentDisk:
    pdName: my-es-disk
    fsType: ext4
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pv-claim-es
spec:
  storageClassName: ssd
  volumeName: pv-es
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 3G
3 - Deploy statefulSet
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: elastic-data
  labels:
    app: elastic-data
    area: devs
    role: nosql
    version: "6.1.4"
    environment: elastic
spec:
  serviceName: elastic-data
  replicas: 1
  updateStrategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: elastic-data
        area: devs
        role: nosql
        version: "6.1.4"
        environment: elastic
    spec:
      terminationGracePeriodSeconds: 10
      securityContext:
        runAsUser: 1000
        fsGroup: 1000
      containers:
      - name: elastic-data
        image: docker.elastic.co/elasticsearch/elasticsearch:6.1.4
        resources:
          requests:
            memory: "512Mi"
          limits:
            memory: "1024Mi"
        ports:
        - containerPort: 9300
          name: transport
        - containerPort: 9200
          name: http
        volumeMounts:
        - name: data-volume
          mountPath: /usr/share/elasticsearch/data
      volumes:
      - name: data-volume
        persistentVolumeClaim:
          claimName: pv-claim-es
