Elasticsearch version 7.9 cannot upgrade to 8.0.0

The following is my Elasticsearch deployment YAML file for version 7.9:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: elasticsearch
  labels:
    app: elasticsearch
spec:
  selector:
    matchLabels:
      app: elasticsearch
  replicas: 1
  template:
    metadata:
      labels:
        app: elasticsearch
    spec:
      containers:
      - name: elasticsearch
        image: elasticsearch:7.9.0
        resources:
          requests:
            memory: 4Gi
          limits:
            memory: 5Gi
        ports:
        - containerPort: 9200
        - containerPort: 9300
        env:
        - name: discovery.type
          value: single-node
It works fine. But when I try to upgrade the Elasticsearch version to 8.0.0 in the same YAML above (image: elasticsearch:8.0.0), I can no longer access Elasticsearch at my URL. Are there any changes to be made for ES version 8.0.0?
Below is my service.yaml:
apiVersion: v1
kind: Service
metadata:
  name: esservice
spec:
  selector:
    app: elasticsearch
  type: NodePort
  ports:
  - port: 9200
    targetPort: 9200
    nodePort: 31200
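Note: a likely cause is that Elasticsearch 8.x enables X-Pack security (TLS and authentication) by default, so a 7.x single-node setup that was reachable over plain HTTP will refuse the same requests after the image bump. A minimal sketch of the container env for the Deployment above, assuming you deliberately want to keep an unsecured dev setup:
env:
- name: discovery.type
  value: single-node
- name: xpack.security.enabled # security defaults to on in 8.x; disable it explicitly to keep plain HTTP
  value: "false"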

Related

How can I deploy Kibana for Elasticsearch on Kubernetes?

Could anyone help me out with how to deploy Kibana on a Kubernetes cluster and connect it to a pre-existing Elasticsearch? I couldn't find any appropriate doc on Google.
Here's a bare minimum to help you get started; just change elasticsearch below to your own Elasticsearch service name in your cluster.
apiVersion: v1
kind: Service
metadata:
  name: kibana
spec:
  ports:
  - port: 5601
  selector:
    run: kibana
---
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: kibana
  name: kibana
spec:
  containers:
  - env:
    - name: ELASTICSEARCH_HOSTS
      value: http://elasticsearch:9200 # <-- change to your own es service url
    image: docker.elastic.co/kibana/kibana:7.16.3
    imagePullPolicy: IfNotPresent
    name: kibana
    ports:
    - containerPort: 5601
  restartPolicy: OnFailure
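If you just want to verify Kibana from your workstation, the quickest check is kubectl port-forward svc/kibana 5601:5601 and then browsing to http://localhost:5601; for anything longer-lived, expose the kibana service through a NodePort or an Ingress instead.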

Expose Elastic APM through Ingress Controller

I have deployed the Elastic APM server into Kubernetes and was trying to expose it through the NGINX ingress controller. The following is my configuration:
---
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: elastic
  name: apm-server-config
  labels:
    k8s-app: apm-server
data:
  apm-server.yml: |-
    apm-server:
      host: "0.0.0.0:8200"
    setup.kibana:
      enabled: "true"
      host: "kibana:5601"
    output.elasticsearch:
      hosts: ["elastic:9200"]
---
#Deployment Configuration
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    name: apm-server
    env: msprod
    state: common
  name: apm-server
  namespace: elastic
spec:
  replicas: 1
  minReadySeconds: 10
  selector:
    matchLabels:
      app: apm-server
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: apm-server
    spec:
      containers:
      - image: docker.elastic.co/apm/apm-server:7.12.1
        imagePullPolicy: Always
        env:
        - name: output.elasticsearch.hosts
          value: "http://elastic:9200"
        name: apm-server
        ports:
        - name: liveness-port
          containerPort: 8200
        volumeMounts:
        - name: apm-server-config
          mountPath: /usr/share/apm-server/apm-server.yml
          readOnly: true
          subPath: apm-server.yml
        resources:
          limits:
            cpu: 250m
            memory: 1024Mi
          requests:
            cpu: 100m
            memory: 250Mi
      volumes:
      - name: apm-server-config
        configMap:
          name: apm-server-config
      nodeSelector:
        env: prod
      restartPolicy: Always
      terminationGracePeriodSeconds: 30
---
#Service Configuration
apiVersion: v1
kind: Service
metadata:
  labels:
    app: apm-server
  name: apm-server
  namespace: elastic
spec:
  ports:
  - port: 8200
    targetPort: 8200
    name: http
    nodePort: 31000
  selector:
    app: apm-server
  sessionAffinity: None
  type: NodePort
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  namespace: elastic
  name: gateway-ingress-apm
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
  - host: my.domain.com
    http:
      paths:
      - path: /apm
        backend:
          serviceName: apm-server
          servicePort: 8200
The pod is running, and I am able to hit the APM server using kubectl port-forward.
But when I access the APM server at https://my.domain.com/apm, I get a page not found error in the browser and the following error in the APM pod:
{"log.level":"error","#timestamp":"2021-10-21T06:22:00.198Z","log.logger":"request","log.origin":{"file.name":"middleware/log_middleware.go","file.line":60},"message":"404 page not found","url.original":"/apm","http.request.method":"GET","user_agent.original":"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/94.0.4606.61 Safari/537.36","source.address":"10.148.7.7","http.request.body.bytes":0,"http.request.id":"9294124a-5356-4b2c-ba8e-c0a589b23571","event.duration":110881,"http.response.status_code":404,"error.message":"404 page not found","ecs.version":"1.6.0"}
The error occurs because there is no context path configured in APM. I have gone through the APM documentation and couldn't find a way to configure a context path in the APM server. Please help.
Posting this as an answer out of the comments.
The initial ingress rule passes the same path /apm to the APM service, which is confirmed by the error in the APM pod's logs - "message":"404 page not found","url.original":"/apm"
To fix it, the NGINX ingress has a rewrite annotation. The way it works is described in the link, with an example.
The final ingress.yaml should look like:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  namespace: elastic
  name: gateway-ingress-apm
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/rewrite-target: /$2 # adding captured group
spec:
  rules:
  - host: my.domain.com
    http:
      paths:
      - path: /apm(/|$)(.*) # so the captured group works correctly
        backend:
          serviceName: apm-server
          servicePort: 8200
What happens here is that requests sent to my.domain.com/apm go to the service on the / path.
The captured group preserves correct subpaths: for instance, if a request goes to my.domain.com/apm/something, the ingress translates it to /something, which is then passed to the service.
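Note: the networking.k8s.io/v1beta1 Ingress API was removed in Kubernetes 1.22. On newer clusters the same rule would look roughly like the sketch below in the networking.k8s.io/v1 schema (pathType: ImplementationSpecific keeps the regex path working with ingress-nginx):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  namespace: elastic
  name: gateway-ingress-apm
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  rules:
  - host: my.domain.com
    http:
      paths:
      - path: /apm(/|$)(.*)
        pathType: ImplementationSpecific
        backend:
          service:
            name: apm-server
            port:
              number: 8200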

How can I configure a different storage mount for each pod in an Elasticsearch cluster in K8s?

I am deploying an Elasticsearch cluster to K8s on EKS with a node group. I claimed an EBS volume for the cluster's storage. When I launch the cluster, only one pod runs successfully, and I get this error for the other pods:
Warning FailedAttachVolume 3m33s attachdetach-controller Multi-Attach error for volume "pvc-4870bd46-2f1e-402a-acf7-005de83e4588" Volume is already used by pod(s) es-0
Warning FailedMount 90s kubelet Unable to attach or mount volumes: unmounted volumes=[persistent-storage], unattached volumes=[es-config persistent-storage default-token-pqzkp]: timed out waiting for the condition
It means the storage is already in use. I understand that this volume is used by the first pod so the other pods can't use it, but I don't know how to use a different mount path for each pod when they are all using the same EBS volume.
Below is the full spec for the cluster.
apiVersion: v1
kind: ConfigMap
metadata:
  name: es-config
data:
  elasticsearch.yml: |
    cluster.name: elk-cluster
    network.host: "0.0.0.0"
    bootstrap.memory_lock: false
    # discovery.zen.minimum_master_nodes: 2
    node.max_local_storage_nodes: 9
    discovery.seed_hosts:
      - es-0.es-entrypoint.default.svc.cluster.local
      - es-1.es-entrypoint.default.svc.cluster.local
      - es-2.es-entrypoint.default.svc.cluster.local
  ES_JAVA_OPTS: -Xms4g -Xmx8g
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: es
  namespace: default
spec:
  serviceName: es-entrypoint
  replicas: 3
  selector:
    matchLabels:
      name: es
  template:
    metadata:
      labels:
        name: es
    spec:
      volumes:
      - name: es-config
        configMap:
          name: es-config
          items:
          - key: elasticsearch.yml
            path: elasticsearch.yml
      - name: persistent-storage
        persistentVolumeClaim:
          claimName: ebs-claim
      initContainers:
      - name: permissions-fix
        image: busybox
        volumeMounts:
        - name: persistent-storage
          mountPath: /usr/share/elasticsearch/data
        command: [ 'chown' ]
        args: [ '1000:1000', '/usr/share/elasticsearch/data' ]
      containers:
      - name: es
        image: elasticsearch:7.10.1
        resources:
          requests:
            cpu: 2
            memory: 8Gi
        ports:
        - name: http
          containerPort: 9200
        - containerPort: 9300
          name: inter-node
        volumeMounts:
        - name: es-config
          mountPath: /usr/share/elasticsearch/config/elasticsearch.yml
          subPath: elasticsearch.yml
        - name: persistent-storage
          mountPath: /usr/share/elasticsearch/data
---
apiVersion: v1
kind: Service
metadata:
  name: es-entrypoint
spec:
  selector:
    name: es
  ports:
  - port: 9200
    targetPort: 9200
    protocol: TCP
  clusterIP: None
You should be using volumeClaimTemplates with the StatefulSet so that each pod gets its own volume (see the note after the snippet). Details:
volumeClaimTemplates:
- metadata:
    name: persistent-storage # renamed from es: must match the volumeMounts name in the pod template
  spec:
    accessModes:
    - ReadWriteOnce
    resources:
      requests:
        storage: 5Gi
    # storageClassName: <omit to use default StorageClass, or specify>
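Note that the claim template name must match the volumeMounts name in the pod template (persistent-storage in the StatefulSet above), and the explicit persistent-storage volume that points at ebs-claim has to be removed from spec.template.spec.volumes; otherwise the pods keep fighting over the single EBS volume. With the template in place, Kubernetes creates one PVC per pod (persistent-storage-es-0, persistent-storage-es-1, ...), each bound to its own EBS volume.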

Kubernetes pointing to Oracle DB in separate VM

I am currently working on a Kubernetes deployment. My application is running in the Kubernetes cluster while my DB is running in a different VM.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: dcalln
spec:
  selector:
    matchLabels:
      app: dcalln
  replicas: 1
  template:
    metadata:
      labels:
        app: dcalln
    spec:
      containers:
      - name: dcalln
        image: "xxx.io/registry:1.0.88-ad3c142-2108190744"
        ports:
        - containerPort: 8080
      imagePullSecrets:
      - name: regcred
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: dcalln
  name: dcalln
  namespace: testnamespace
spec:
  ports:
  - name: http
    port: 8080
    protocol: TCP
    targetPort: 1512
  externalIPs:
  - XXX.XXX.XXX.XXX
XXX.XXX.XXX.XXX is my Oracle DB server; it's not part of the Kubernetes cluster, but I see the DB connection is not happening. Is there anything I am missing? How do I change my deployment specification to correctly point to the DB?
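Note: externalIPs does not make the cluster route traffic out to an external host; it only tells kube-proxy which external addresses to accept traffic on. A common pattern for reaching a database outside the cluster is a selector-less Service plus a manual Endpoints object, sketched below (the name oracle-db is made up, and the port assumes the default Oracle listener on 1521):
apiVersion: v1
kind: Service
metadata:
  name: oracle-db
  namespace: testnamespace
spec:
  ports:
  - port: 1521
    targetPort: 1521
---
apiVersion: v1
kind: Endpoints
metadata:
  name: oracle-db # must match the Service name
  namespace: testnamespace
subsets:
- addresses:
  - ip: XXX.XXX.XXX.XXX # the VM running Oracle
  ports:
  - port: 1521
The application can then connect to oracle-db:1521 (or oracle-db.testnamespace.svc.cluster.local:1521 from other namespaces) as if the DB were in-cluster.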

How to run a Spring Boot MySQL application Docker image on Kubernetes?

My Dockerfile looks like:
FROM openjdk:8-jdk-alpine
ARG JAR_FILE=target/*.jar
COPY ${JAR_FILE} app.jar
ENTRYPOINT ["java","-jar","/app.jar"]
and my YAML file looks like:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: imagename
  namespace: default
spec:
  replicas: 3
  selector:
    matchLabels:
      bb: web
  template:
    metadata:
      labels:
        bb: web
    spec:
      containers:
      - name: imagename
        image: imagename:1.1
        imagePullPolicy: Never
        env:
        - name: MYSQL_USER
          value: root
        ports:
        - containerPort: 3306
---
apiVersion: v1
kind: Service
metadata:
  name: imagename
  namespace: default
spec:
  type: NodePort
  selector:
    bb: web
  ports:
  - port: 8080
    targetPort: 8080
    nodePort: 30001
I have built the Docker image using the command below:
docker build -t dockerimage:1.1 .
and I run the Docker image like:
docker run -p 8080:8080 --network=host dockerimage:1.1
When I deploy this image in the Kubernetes environment, I get this error:
ERROR com.zaxxer.hikari.pool.HikariPool - HikariPool-1 - Exception during pool initialization.
com.mysql.cj.jdbc.exceptions.CommunicationsException: Communications link failure
The last packet sent successfully to the server was 0 milliseconds ago. The driver has not received any packets from the server.
at com.mysql.cj.jdbc.exceptions.SQLError.createCommunicationsException(SQLError.java:174)
at com.mysql.cj.jdbc.exceptions.SQLExceptionsMapping.translateException(SQLExceptionsMapping.java:64)
I have also done port forwarding:
Forwarding from 127.0.0.1:13306 -> 3306
Any suggestions on what is wrong with the above configuration?
You need to add a ClusterIP service for your database, like this:
MySQL Service:
apiVersion: v1
kind: Service
metadata:
  name: mysql-service
  labels:
    app: mysql
spec:
  ports:
  - port: 3306
  selector:
    app: mysql-deployment # must match the pod labels in the MySQL Deployment below
    tier: mysql
  clusterIP: None
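With clusterIP: None this is a headless service, so the DNS name mysql-service resolves directly to the pod IP; for a single-replica database that is exactly what the application needs.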
MySQL PVC:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pv-claim
  labels:
    app: my-db-pv-claim
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi
MySQL Deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql-deployment
  labels:
    app: mysql-deployment
spec:
  selector:
    matchLabels:
      app: mysql-deployment
      tier: mysql
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: mysql-deployment
        tier: mysql
    spec:
      containers:
      - image: mysql:5.6
        name: mysql
        env:
        - name: MYSQL_ROOT_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mysql-secret
              key: MYSQL_ROOT_PASSWORD
        ports:
        - containerPort: 3306
          name: mysql
        volumeMounts:
        - name: mysql-persistent-storage
          mountPath: /var/lib/mysql
      volumes:
      - name: mysql-persistent-storage
        persistentVolumeClaim:
          claimName: mysql-pv-claim
Now, on the Spring application side, what you need in order to access the database is:
Spring Boot Deployment:
apiVersion: apps/v1 # API version
kind: Deployment # Type of kubernetes resource
metadata:
  name: order-app-server # Name of the kubernetes resource
  labels: # Labels that will be applied to this resource
    app: order-app-server
spec:
  replicas: 1 # No. of replicas/pods to run in this deployment
  selector:
    matchLabels: # The deployment applies to any pods matching the specified labels
      app: order-app-server
  template: # Template for creating the pods in this deployment
    metadata:
      labels: # Labels that will be applied to each Pod in this deployment
        app: order-app-server
    spec: # Spec for the containers that will be run in the Pods
      imagePullSecrets:
      - name: testXxxxxsecret
      containers:
      - name: order-app-server
        image: XXXXXX/order:latest
        ports:
        - containerPort: 8080 # The port that the container exposes
        env: # Environment variables supplied to the Pod
        - name: MYSQL_ROOT_USERNAME # Name of the environment variable
          valueFrom: # Get the value of environment variable from kubernetes secrets
            secretKeyRef:
              name: mysql-secret
              key: MYSQL_ROOT_USERNAME
        - name: MYSQL_ROOT_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mysql-secret
              key: MYSQL_ROOT_PASSWORD
        - name: MYSQL_ROOT_URL
          valueFrom:
            secretKeyRef:
              name: mysql-secret
              key: MYSQL_ROOT_URL # fixed: originally referenced MYSQL_ROOT_PASSWORD, which would inject the password as the URL
Create your Secret:
apiVersion: v1
kind: Secret
data:
  MYSQL_ROOT_URL: <BASE64-ENCODED-DB-NAME>
  MYSQL_ROOT_USERNAME: <BASE64-ENCODED-DB-USERNAME>
  MYSQL_ROOT_PASSWORD: <BASE64-ENCODED-DB-PASSWORD>
metadata:
  name: mysql-secret
Spring Boot Service:
apiVersion: v1 # API version
kind: Service # Type of the kubernetes resource
metadata:
  name: order-app-server-service # Name of the kubernetes resource
  labels: # Labels that will be applied to this resource
    app: order-app-server
spec:
  type: LoadBalancer # The service will be exposed through the cloud provider's load balancer
  selector:
    app: order-app-server # The service exposes Pods with label `app=order-app-server`
  ports: # Forward incoming connections on port 8080 to the target port 8080
  - name: http
    port: 8080
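Finally, the HikariCP CommunicationsException in the question usually means the datasource still points at localhost. Inside the cluster the app should use the MySQL service DNS name instead; a sketch of the relevant application.yml, assuming a database named orderdb (the name is made up):
spring:
  datasource:
    url: jdbc:mysql://mysql-service:3306/orderdb # service DNS name, not localhost
    username: ${MYSQL_ROOT_USERNAME} # injected from mysql-secret via the Deployment env
    password: ${MYSQL_ROOT_PASSWORD}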
