I am trying to run a legacy PHP Laravel app on my EKS cluster.
I have containerized the application with the Dockerfile below:
FROM php:7.2-fpm
RUN apt-get update -y \
&& apt-get install -y nginx
# PHP_CPPFLAGS are used by the docker-php-ext-* scripts
ENV PHP_CPPFLAGS="$PHP_CPPFLAGS -std=c++11"
RUN docker-php-ext-install pdo_mysql \
&& docker-php-ext-install opcache \
&& apt-get install libicu-dev -y \
&& docker-php-ext-configure intl \
&& docker-php-ext-install intl \
&& apt-get remove libicu-dev icu-devtools -y
RUN { \
echo 'opcache.memory_consumption=128'; \
echo 'opcache.interned_strings_buffer=8'; \
echo 'opcache.max_accelerated_files=4000'; \
echo 'opcache.revalidate_freq=2'; \
echo 'opcache.fast_shutdown=1'; \
echo 'opcache.enable_cli=1'; \
} > /usr/local/etc/php/conf.d/php-opcache-cfg.ini
COPY nginx-site.conf /etc/nginx/sites-enabled/default
COPY entrypoint.sh /etc/entrypoint.sh
COPY --chown=www-data:www-data . /var/www/mysite
RUN chmod +x /etc/entrypoint.sh
WORKDIR /var/www/mysite
EXPOSE 9000
ENTRYPOINT ["sh", "/etc/entrypoint.sh"]
And the nginx-site.conf
server {
root /var/www/mysite/web;
include /etc/nginx/default.d/*.conf;
index app.php index.php index.html index.htm;
client_max_body_size 30m;
location / {
try_files $uri $uri/ /app.php$is_args$args;
}
location ~ [^/]\.php(/|$) {
fastcgi_split_path_info ^(.+?\.php)(/.*)$;
# Mitigate https://httpoxy.org/ vulnerabilities
fastcgi_param HTTP_PROXY "";
fastcgi_pass 127.0.0.1:9000;
fastcgi_index app.php;
include fastcgi.conf;
}
}
The docker-compose.yaml
version: '3'
services:
proxy:
image: nginx:latest
ports:
- "80:80"
volumes:
- ./proxy/nginx.conf:/etc/nginx/nginx.conf
web:
image: nginx:latest
expose:
- "9000"
volumes:
- ./source:/source
- ./nginx/default.conf:/etc/nginx/conf.d/default.conf
php:
build:
context: .
dockerfile: php/Dockerfile
volumes:
- ./source:/source
I have deployed the NGINX ingress controller provided by the official Kubernetes GitHub repository via Helm, and it looks something like this:
---
# Source: ingress-nginx/templates/controller-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
helm.sh/chart: ingress-nginx-3.35.0
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/instance: my-ing
app.kubernetes.io/version: "0.48.1"
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/component: controller
name: my-ing-ingress-nginx-controller
namespace: ingress
spec:
selector:
matchLabels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/instance: my-ing
app.kubernetes.io/component: controller
replicas: 1
revisionHistoryLimit: 10
minReadySeconds: 0
template:
metadata:
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/instance: my-ing
app.kubernetes.io/component: controller
spec:
dnsPolicy: ClusterFirst
containers:
- name: controller
image: "k8s.gcr.io/ingress-nginx/controller:v0.48.1#sha256:e9fb216ace49dfa4a5983b183067e97496e7a8b307d2093f4278cd550c303899"
imagePullPolicy: IfNotPresent
lifecycle:
preStop:
exec:
command:
- /wait-shutdown
args:
- /nginx-ingress-controller
- --publish-service=$(POD_NAMESPACE)/my-ing-ingress-nginx-controller
- --election-id=ingress-controller-leader
- --ingress-class=external-nginx
- --configmap=$(POD_NAMESPACE)/my-ing-ingress-nginx-controller
- --validating-webhook=:8443
- --validating-webhook-certificate=/usr/local/certificates/cert
- --validating-webhook-key=/usr/local/certificates/key
securityContext:
capabilities:
drop:
- ALL
add:
- NET_BIND_SERVICE
runAsUser: 101
allowPrivilegeEscalation: true
env:
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
- name: LD_PRELOAD
value: /usr/local/lib/libmimalloc.so
livenessProbe:
failureThreshold: 5
httpGet:
path: /healthz
port: 10254
scheme: HTTP
initialDelaySeconds: 10
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 1
readinessProbe:
failureThreshold: 3
httpGet:
path: /healthz
port: 10254
scheme: HTTP
initialDelaySeconds: 10
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 1
ports:
- name: http
containerPort: 80
protocol: TCP
- name: https
containerPort: 443
protocol: TCP
- name: metrics
containerPort: 10254
protocol: TCP
- name: webhook
containerPort: 8443
protocol: TCP
volumeMounts:
- name: webhook-cert
mountPath: /usr/local/certificates/
readOnly: true
resources:
requests:
cpu: 100m
memory: 90Mi
nodeSelector:
kubernetes.io/os: linux
affinity:
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchExpressions:
- key: app.kubernetes.io/name
operator: In
values:
- ingress-nginx
topologyKey: kubernetes.io/hostname
serviceAccountName: my-ing-ingress-nginx
terminationGracePeriodSeconds: 300
volumes:
- name: webhook-cert
secret:
secretName: my-ing-ingress-nginx-admission
I have deployed my application with a YAML manifest like the one below.
apiVersion: v1
kind: Namespace
metadata:
name: tardis
labels:
monitoring: prometheus
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: website
namespace: tardis
spec:
replicas: 2
selector:
matchLabels:
app: website
template:
metadata:
labels:
app: website
spec:
containers:
- name: website
image: image directory on ecr
ports:
- containerPort: 9000
---
apiVersion: v1
kind: Service
metadata:
namespace: tardis
name: website
spec:
type: ClusterIP
ports:
- name: http
port: 9000
selector:
app: website
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: website
namespace: tardis
annotations:
nginx.ingress.kubernetes.io/rewrite-target: /
spec:
ingressClassName: external-nginx
rules:
- host: home.tardis.kr
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: website
port:
number: 9000
When I check the log of the pods, it seems to indicate that it is running fine.
[27-Apr-2022 23:33:23] NOTICE: fpm is running, pid 25
[27-Apr-2022 23:33:23] NOTICE: ready to handle connections
However, when I try to access the address mentioned in my Ingress, it gives a "502 Bad Gateway" error from nginx.
I am really new to DevOps and even newer to PHP and Laravel.
If there is anything wrong with the way I containerized the application, or the way I deployed it, any sort of feedback would be much appreciated!!
Thank you in advance!!
I think this was a problem that had something to do with the Dockerfile. I followed the instructions and it seems to work fine.
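The exact fix isn't spelled out above. For anyone hitting the same 502 with a similar setup, one common culprit (an assumption based on the manifests shown, not the author's confirmed change) is the port wiring: php-fpm listens on 9000 and speaks FastCGI, while the ingress controller forwards plain HTTP, so the Service and Ingress should target the port nginx listens on (80 by default for the server block above), and nginx must actually be started inside the container. A sketch of the adjusted Service:
# sketch: point the Service at the nginx HTTP port instead of php-fpm's FastCGI port
apiVersion: v1
kind: Service
metadata:
namespace: tardis
name: website
spec:
type: ClusterIP
ports:
- name: http
port: 80
targetPort: 80
selector:
app: website
The Ingress backend port would then reference 80 instead of 9000.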
Related
I've read a number of similar questions on here and blogs online, and I've tried a number of configuration changes, but I cannot seem to get anything to work. I'm using ECK to manage an Elasticsearch & Kibana stack on IBM Cloud IKS (classic).
I want to use App ID as an OAuth2 provider with an nginx ingress for authentication. I have that part partially working: I get the SSO login and authenticate there successfully, but instead of being redirected to the Kibana application landing page I get redirected to the Kibana login page. I am using Helm to manage the Elasticsearch, Kibana and Ingress resources. I will template the resources and put the YAML manifests here with some dummy values.
helm template --name-template=es-kibana-ingress es-k-stack -s templates/kibana.yaml --set ingress.enabled=true --set ingress.host="CLUSTER.REGION.containers.appdomain.cloud" --set ingress.secretName="CLUSTER_SECRET" --set app_id.enabled=true --set app_id.instanceName=APPID_INSTANCE_NAME > kibana_template.yaml
apiVersion: kibana.k8s.elastic.co/v1beta1
kind: Kibana
metadata:
name: es-kibana-ingress-es-k-stack
spec:
config:
server.rewriteBasePath: true
server.basePath: /kibana-es-kibana-ingress
server.publicBaseUrl: https://CLUSTER.REGION.containers.appdomain.cloud/kibana-es-kibana-ingress
version: 7.16.3
count: 1
elasticsearchRef:
name: es-kibana-ingress-es-k-stack
podTemplate:
spec:
containers:
- name: kibana
readinessProbe:
httpGet:
scheme: HTTPS
path: /kibana-es-kibana-ingress
port: 5601
helm template --name-template=es-kibana-ingress es-k-stack -s templates/ingress.yaml --set ingress.enabled=true --set ingress.host="CLUSTER.REGION.containers.appdomain.cloud" --set ingress.secretName="CLUSTER_SECRET" --set app_id.enabled=true --set app_id.instanceName=APPID_INSTANCE_NAME > kibana_ingress_template.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: es-kibana-ingress
namespace: es-kibana-ingress
annotations:
kubernetes.io/ingress.class: "public-iks-k8s-nginx"
kubernetes.io/tls-acme: "true"
nginx.ingress.kubernetes.io/proxy-ssl-verify: "false"
nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
nginx.ingress.kubernetes.io/auth-signin: https://$host/oauth2-APPID_INSTANCE_NAME/start?rd=$escaped_request_uri
nginx.ingress.kubernetes.io/auth-url: https://$host/oauth2-APPID_INSTANCE_NAME/auth
nginx.ingress.kubernetes.io/configuration-snippet: |
auth_request_set $name_upstream_1 $upstream_cookie__oauth2_APPID_INSTANCE_NAME_1;
auth_request_set $access_token $upstream_http_x_auth_request_access_token;
auth_request_set $id_token $upstream_http_authorization;
access_by_lua_block {
if ngx.var.name_upstream_1 ~= "" then
ngx.header["Set-Cookie"] = "_oauth2_APPID_INSTANCE_NAME_1=" .. ngx.var.name_upstream_1 .. ngx.var.auth_cookie:match("(; .*)")
end
if ngx.var.id_token ~= "" and ngx.var.access_token ~= "" then
ngx.req.set_header("Authorization", "Bearer " .. ngx.var.access_token .. " " .. ngx.var.id_token:match("%s*Bearer%s*(.*)"))
end
}
nginx.ingress.kubernetes.io/proxy-buffer-size: 16k
nginx.ingress.kubernetes.io/ssl-redirect: "false"
spec:
tls:
- hosts:
- CLUSTER.REGION.containers.appdomain.cloud
secretName: CLUSTER_SECRET
rules:
- host: CLUSTER.REGION.containers.appdomain.cloud
http:
paths:
- backend:
service:
name: es-kibana-ingress-xdr-datalake-kb-http
port:
number: 5601
path: /kibana-es-kibana-ingress
pathType: ImplementationSpecific
helm template --name-template=es-kibana-ingress ~/Git/xdr_datalake/helm/xdr-es-k-stack/ -s templates/elasticsearch.yaml --set ingress.enabled=true --set ingress.host="CLUSTER.REGION.containers.appdomain.cloud" --set ingress.secretName="CLUSTER_SECRET" --set app_id.enabled=true --set app_id.instanceName=APPID_INSTANCE_NAME > elastic_template.yaml
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
name: es-kibana-ingress-es-k-stack
spec:
version: 7.16.3
nodeSets:
- name: master
count: 1
config:
node.store.allow_mmap: true
node.roles: ["master"]
xpack.ml.enabled: true
reindex.remote.whitelist: [CLUSTER.REGION.containers.appdomain.cloud:443]
indices.query.bool.max_clause_count: 3000
xpack:
license.self_generated.type: basic
volumeClaimTemplates:
- metadata:
name: elasticsearch-data
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 20Gi
storageClassName: ibmc-file-retain-gold-custom-terraform
podTemplate:
spec:
affinity:
podAntiAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- weight: 100
podAffinityTerm:
labelSelector:
matchLabels:
elasticsearch.k8s.elastic.co/cluster-name: es-kibana-ingress-es-k-stack
topologyKey: kubernetes.io/hostname
preferredDuringSchedulingIgnoredDuringExecution:
- weight: 100
podAffinityTerm:
labelSelector:
matchLabels:
elasticsearch.k8s.elastic.co/cluster-name: es-kibana-ingress-es-k-stack
topologyKey: kubernetes.io/zone
initContainers:
- name: sysctl
securityContext:
privileged: true
command: ['sh', '-c', 'sysctl -w vm.max_map_count=262144']
volumes:
- name: elasticsearch-data
emptyDir: {}
containers:
- name: elasticsearch
resources:
limits:
cpu: 4
memory: 6Gi
requests:
cpu: 2
memory: 3Gi
env:
- name: NAMESPACE
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: metadata.namespace
- name: NODE_NAME
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: metadata.name
- name: NETWORK_HOST
value: _site_
- name: MAX_LOCAL_STORAGE_NODES
value: "1"
- name: DISCOVERY_SERVICE
value: elasticsearch-discovery
- name: HTTP_CORS_ALLOW_ORIGIN
value: '*'
- name: HTTP_CORS_ENABLE
value: "true"
- name: data
count: 1
config:
node.roles: ["data", "ingest", "ml", "transform"]
reindex.remote.whitelist: [CLUSTER.REGION.containers.appdomain.cloud:443]
indices.query.bool.max_clause_count: 3000
xpack:
license.self_generated.type: basic
volumeClaimTemplates:
- metadata:
name: elasticsearch-data
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 20Gi
storageClassName: ibmc-file-retain-gold-custom-terraform
podTemplate:
spec:
affinity:
podAntiAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- weight: 100
podAffinityTerm:
labelSelector:
matchLabels:
elasticsearch.k8s.elastic.co/cluster-name: es-kibana-ingress-es-k-stack
topologyKey: kubernetes.io/hostname
preferredDuringSchedulingIgnoredDuringExecution:
- weight: 100
podAffinityTerm:
labelSelector:
matchLabels:
elasticsearch.k8s.elastic.co/cluster-name: es-kibana-ingress-es-k-stack
topologyKey: kubernetes.io/zone
initContainers:
- name: sysctl
securityContext:
privileged: true
command: ['sh', '-c', 'sysctl -w vm.max_map_count=262144']
volumes:
- name: elasticsearch-data
emptyDir: {}
containers:
- name: elasticsearch
resources:
limits:
cpu: 4
memory: 6Gi
requests:
cpu: 2
memory: 3Gi
env:
- name: NAMESPACE
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: metadata.namespace
- name: NODE_NAME
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: metadata.name
- name: NETWORK_HOST
value: _site_
- name: MAX_LOCAL_STORAGE_NODES
value: "1"
- name: DISCOVERY_SERVICE
value: elasticsearch-discovery
- name: HTTP_CORS_ALLOW_ORIGIN
value: '*'
- name: HTTP_CORS_ENABLE
value: "true"
Any pointers would be greatly appreciated. I'm sure it's something small that I'm missing but I cannot find it anywhere online - I think I'm missing some token or authorization header rewrite, but I cannot figure it out.
So this comes down to a misunderstanding on my part. On previous self-managed ELK stacks the above worked; the difference is that on ECK, security is enabled by default. So even when you have your nginx reverse proxy set up to provide the SAML integration correctly (as above), you still get the Kibana login page.
To circumvent this I set up a filerealm for authentication purposes and provided a username/password for a kibana admin user:
helm template --name-template=es-kibana-ingress xdr-es-k-stack -s templates/crd_kibana.yaml --set ingress.enabled=true --set ingress.host="CLUSTER.REGION.containers.appdomain.cloud" --set ingress.secretName="CLUSTER_SECRET" --set app_id.enabled=true --set app_id.instanceName=APPID_INSTANCE_NAME --set kibana.kibanaUser="kibanaUSER" --set kibana.kibanaPass="kibanaPASS"
apiVersion: kibana.k8s.elastic.co/v1beta1
kind: Kibana
metadata:
name: es-kibana-ingress-xdr-datalake
namespace: default
spec:
config:
server.rewriteBasePath: true
server.basePath: /kibana-es-kibana-ingress
server.publicBaseUrl: https://CLUSTER.REGION.containers.appdomain.cloud/kibana-es-kibana-ingress
server.host: "0.0.0.0"
server.name: kibana
xpack.security.authc.providers:
anonymous.anonymous1:
order: 0
credentials:
username: kibanaUSER
password: kibanaPASS
version: 7.16.3
http:
tls:
selfSignedCertificate:
disabled: true
count: 1
elasticsearchRef:
name: es-kibana-ingress-xdr-datalake
podTemplate:
spec:
containers:
- name: kibana
readinessProbe:
timeoutSeconds: 30
httpGet:
scheme: HTTP
path: /kibana-es-kibana-ingress/app/dev_tools
port: 5601
resources:
limits:
cpu: 3
memory: 1Gi
requests:
cpu: 3
memory: 1Gi
helm template --name-template=es-kibana-ingress xdr-es-k-stack -s templates/crd_elasticsearch.yaml --set ingress.enabled=true --set ingress.host="CLUSTER.REGION.containers.appdomain.cloud" --set ingress.secretName="CLUSTER_SECRET" --set app_id.enabled=true --set app_id.instanceName=APPID_INSTANCE_NAME --set kibana.kibanaUser="kibanaUSER" --set kibana.kibanaPass="kibanaPASS"
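The rendered Elasticsearch manifest from that command isn't shown. For reference, ECK can declare file-realm users through a Secret referenced from the Elasticsearch resource; the sketch below is an assumption about how such a user could be wired up (the Secret name and role are illustrative, not taken from the original chart):
# hypothetical Secret holding the file-realm user consumed by Kibana's anonymous provider
apiVersion: v1
kind: Secret
metadata:
name: kibana-admin-filerealm
stringData:
username: kibanaUSER
password: kibanaPASS
roles: superuser
---
# referenced from the Elasticsearch spec (only the relevant fields shown)
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
name: es-kibana-ingress-xdr-datalake
spec:
version: 7.16.3
auth:
fileRealm:
- secretName: kibana-admin-filerealm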
You may have noticed I removed the self-signed certs; this was due to an issue connecting Kafka to ES on the cluster. We have decided to use Istio to provide the internal network connection, but if you don't have this issue you could keep them. I also had to update the Ingress a bit to work with this new HTTP backend (previously HTTPS):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: es-kibana-ingress-kibana
namespace: default
annotations:
kubernetes.io/ingress.class: "public-iks-k8s-nginx"
kubernetes.io/tls-acme: "true"
nginx.ingress.kubernetes.io/proxy-ssl-verify: "false"
nginx.ingress.kubernetes.io/backend-protocol: "HTTP"
nginx.ingress.kubernetes.io/auth-signin: https://$host/oauth2-APPID_INSTANCE_NAME/start?rd=$escaped_request_uri
nginx.ingress.kubernetes.io/auth-url: https://$host/oauth2-APPID_INSTANCE_NAME/auth
nginx.ingress.kubernetes.io/configuration-snippet: |
auth_request_set $name_upstream_1 $upstream_cookie__oauth2_APPID_INSTANCE_NAME_1;
auth_request_set $access_token $upstream_http_x_auth_request_access_token;
auth_request_set $id_token $upstream_http_authorization;
access_by_lua_block {
if ngx.var.name_upstream_1 ~= "" then
ngx.header["Set-Cookie"] = "_oauth2_APPID_INSTANCE_NAME_1=" .. ngx.var.name_upstream_1 .. ngx.var.auth_cookie:match("(; .*)")
end
if ngx.var.id_token ~= "" and ngx.var.access_token ~= "" then
ngx.req.set_header("Authorization", "Bearer " .. ngx.var.access_token .. " " .. ngx.var.id_token:match("%s*Bearer%s*(.*)"))
end
}
nginx.ingress.kubernetes.io/proxy-buffer-size: 16k
nginx.ingress.kubernetes.io/ssl-redirect: "false"
spec:
tls:
- hosts:
- CLUSTER.REGION.containers.appdomain.cloud
secretName: CLUSTER_SECRET
rules:
- host: CLUSTER.REGION.containers.appdomain.cloud
http:
paths:
- backend:
service:
name: es-kibana-ingress-xdr-datalake-kb-http
port:
number: 5601
path: /kibana-es-kibana-ingress
pathType: ImplementationSpecific
Hopefully this helps someone else in the future.
I have an Elasticsearch cluster (6.3) running on Kubernetes (GKE) with the following manifest file:
---
# Source: elasticsearch/templates/manifests.yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: elasticsearch-configmap
labels:
app.kubernetes.io/name: "elasticsearch"
app.kubernetes.io/component: elasticsearch-server
data:
elasticsearch.yml: |
cluster.name: "${CLUSTER_NAME}"
node.name: "${NODE_NAME}"
path.data: /usr/share/elasticsearch/data
path.repo: ["${BACKUP_REPO_PATH}"]
network.host: 0.0.0.0
discovery.zen.minimum_master_nodes: 1
discovery.zen.ping.unicast.hosts: ${DISCOVERY_SERVICE}
log4j2.properties: |
status = error
appender.console.type = Console
appender.console.name = console
appender.console.layout.type = PatternLayout
appender.console.layout.pattern = [%d{ISO8601}][%-5p][%-25c{1.}] %marker%m%n
rootLogger.level = info
rootLogger.appenderRef.console.ref = console
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: elasticsearch
labels: &ElasticsearchDeploymentLabels
app.kubernetes.io/name: "elasticsearch"
app.kubernetes.io/component: elasticsearch-server
spec:
selector:
matchLabels: *ElasticsearchDeploymentLabels
serviceName: elasticsearch-svc
replicas: 2
updateStrategy:
# The procedure for updating the Elasticsearch cluster is described at
# https://www.elastic.co/guide/en/elasticsearch/reference/current/rolling-upgrades.html
type: OnDelete
template:
metadata:
labels: *ElasticsearchDeploymentLabels
spec:
terminationGracePeriodSeconds: 180
initContainers:
# This init container sets the appropriate limits for mmap counts on the hosting node.
# https://www.elastic.co/guide/en/elasticsearch/reference/current/vm-max-map-count.html
- name: set-max-map-count
image: marketplace.gcr.io/google/elasticsearch/ubuntu16_04@...
imagePullPolicy: IfNotPresent
securityContext:
privileged: true
command:
- /bin/bash
- -c
- 'if [[ "$(sysctl vm.max_map_count --values)" -lt 262144 ]]; then sysctl -w vm.max_map_count=262144; fi'
containers:
- name: elasticsearch
image: eu.gcr.io/projectId/elasticsearch6.3@sha256:...
imagePullPolicy: Always
env:
- name: NODE_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: CLUSTER_NAME
value: "elasticsearch-cluster"
- name: DISCOVERY_SERVICE
value: "elasticsearch-svc"
- name: BACKUP_REPO_PATH
value: ""
ports:
- name: prometheus
containerPort: 9114
protocol: TCP
- name: http
containerPort: 9200
- name: tcp-transport
containerPort: 9300
volumeMounts:
- name: configmap
mountPath: /etc/elasticsearch/elasticsearch.yml
subPath: elasticsearch.yml
- name: configmap
mountPath: /etc/elasticsearch/log4j2.properties
subPath: log4j2.properties
- name: elasticsearch-pvc
mountPath: /usr/share/elasticsearch/data
readinessProbe:
httpGet:
path: /_cluster/health?local=true
port: 9200
initialDelaySeconds: 5
livenessProbe:
exec:
command:
- /usr/bin/pgrep
- -x
- "java"
initialDelaySeconds: 5
resources:
requests:
memory: "2Gi"
- name: prometheus-to-sd
image: marketplace.gcr.io/google/elasticsearch/prometheus-to-sd@sha256:8e3679a6e059d1806daae335ab08b304fd1d8d35cdff457baded7306b5af9ba5
ports:
- name: profiler
containerPort: 6060
command:
- /monitor
- --stackdriver-prefix=custom.googleapis.com
- --source=elasticsearch:http://localhost:9114/metrics
- --pod-id=$(POD_NAME)
- --namespace-id=$(POD_NAMESPACE)
- --monitored-resource-types=k8s
env:
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
volumes:
- name: configmap
configMap:
name: "elasticsearch-configmap"
volumeClaimTemplates:
- metadata:
name: elasticsearch-pvc
labels:
app.kubernetes.io/name: "elasticsearch"
app.kubernetes.io/component: elasticsearch-server
spec:
accessModes: ["ReadWriteOnce"]
storageClassName: standard
resources:
requests:
storage: 50Gi
---
apiVersion: v1
kind: Service
metadata:
name: elasticsearch-prometheus-svc
labels:
app.kubernetes.io/name: elasticsearch
app.kubernetes.io/component: elasticsearch-server
spec:
clusterIP: None
ports:
- name: prometheus-port
port: 9114
protocol: TCP
selector:
app.kubernetes.io/name: elasticsearch
app.kubernetes.io/component: elasticsearch-server
---
apiVersion: v1
kind: Service
metadata:
name: elasticsearch-svc-internal
labels:
app.kubernetes.io/name: "elasticsearch"
app.kubernetes.io/component: elasticsearch-server
spec:
ports:
- name: http
port: 9200
- name: tcp-transport
port: 9300
selector:
app.kubernetes.io/name: "elasticsearch"
app.kubernetes.io/component: elasticsearch-server
type: ClusterIP
---
apiVersion: v1
kind: Service
metadata:
name: ilb-service-elastic
annotations:
cloud.google.com/load-balancer-type: "Internal"
labels:
app: elasticsearch-svc
spec:
type: LoadBalancer
loadBalancerIP: some-ip-address
selector:
app.kubernetes.io/component: elasticsearch-server
app.kubernetes.io/name: elasticsearch
ports:
- port: 9200
protocol: TCP
This manifest was written from the template that used to be available on the GCP marketplace.
I'm encountering the following issue: the cluster is supposed to have 2 nodes, and indeed 2 pods are running.
However:
- a call to ip:9200/_nodes returns just one node
- there still seems to be a second node running that receives traffic (at least, read traffic), as visible in the logs. Those requests typically fail because the requested entities don't exist on that node (just on the master node).
I can't wrap my head around the fact that the node is at the same time invisible to the master node and receiving read traffic from the load balancer pointing to the StatefulSet.
Am I missing something subtle?
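One generic way to narrow this down (not something from the original post) is to ask each pod directly which cluster it thinks it belongs to; if the two pods report different cluster UUIDs or node lists, they never formed a single cluster:
# hypothetical check, assuming curl is available in the image and the pods are named elasticsearch-0/1
kubectl exec elasticsearch-0 -c elasticsearch -- curl -s 'localhost:9200/_cat/nodes?v'
kubectl exec elasticsearch-1 -c elasticsearch -- curl -s 'localhost:9200/_cat/nodes?v'
kubectl exec elasticsearch-0 -c elasticsearch -- curl -s 'localhost:9200/' | grep cluster_uuid
kubectl exec elasticsearch-1 -c elasticsearch -- curl -s 'localhost:9200/' | grep cluster_uuid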
Did you try checking which type each of the two nodes is?
There are master nodes and data nodes; at any given time only one master is elected, while the others stay in the background. If the current master node goes down, a new master gets elected and handles further requests.
I can't see the node type config in your StatefulSet. I would recommend checking out the Elasticsearch Helm chart to set up and deploy on GKE.
Helm chart : https://github.com/elastic/helm-charts/tree/main/elasticsearch
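For example, installing that chart looks roughly like this (the replicas value is illustrative; check the chart's values.yaml for your version):
helm repo add elastic https://helm.elastic.co
helm repo update
# illustrative: a 2-node cluster
helm install elasticsearch elastic/elasticsearch --set replicas=2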
Sharing an example env config for reference:
env:
- name: NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
- name: NODE_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: CLUSTER_NAME
value: my-es
- name: NODE_MASTER
value: "false"
- name: NODE_INGEST
value: "false"
- name: HTTP_ENABLE
value: "false"
- name: ES_JAVA_OPTS
value: -Xms256m -Xmx256m
Read more at: https://faun.pub/https-medium-com-thakur-vaibhav23-ha-es-k8s-7e655c1b7b61
I love Elasticsearch, so on my new project I have been trying to make it work on Kubernetes with Skaffold.
This is the YAML file I wrote:
apiVersion: apps/v1
kind: Deployment
metadata:
name: eks-depl
spec:
replicas: 1
selector:
matchLabels:
app: eks
template:
metadata:
labels:
app: eks
spec:
containers:
- name: eks
image: elasticsearch:7.17.0
---
apiVersion: v1
kind: Service
metadata:
name: eks-srv
spec:
selector:
app: eks
ports:
- name: db
protocol: TCP
port: 9200
targetPort: 9200
- name: monitoring
protocol: TCP
port: 9300
targetPort: 9300
After I run skaffold dev, it shows as running in Kubernetes, but after a few seconds it crashes and goes down.
I can't understand what I am doing wrong.
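To see why the container exits under skaffold dev, the pod logs and events usually show the failing bootstrap check or an OOM kill; a couple of generic checks (resource names taken from the manifest above):
kubectl logs deploy/eks-depl
kubectl describe pod -l app=eks
# look for bootstrap check failures (e.g. discovery settings, vm.max_map_count) or OOMKilled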
After I updated my config files as Mr. Harsh Manvar suggested, it worked like a charm, but currently I am facing another issue. The client side says the following....
Btw, I am using Elasticsearch version 7.11.1 and the client-side module "@elastic/elasticsearch" ^7.11.1.
Here is an example YAML file you should consider running if you are planning to run a single-node Elasticsearch cluster on Kubernetes:
apiVersion: apps/v1
kind: StatefulSet
metadata:
labels:
app: elasticsearch
component: elasticsearch
release: elasticsearch
name: elasticsearch
namespace: default
spec:
podManagementPolicy: OrderedReady
replicas: 1
revisionHistoryLimit: 10
selector:
matchLabels:
app: elasticsearch
component: elasticsearch
release: elasticsearch
serviceName: elasticsearch
template:
metadata:
labels:
app: elasticsearch
component: elasticsearch
release: elasticsearch
spec:
containers:
- env:
- name: cluster.name
value: es_cluster
- name: ELASTIC_PASSWORD
value: xyz-xyz
- name: discovery.type
value: single-node
- name: path.repo
value: backup/es-backup
- name: ES_JAVA_OPTS
value: -Xms512m -Xmx512m
- name: bootstrap.memory_lock
value: "false"
- name: xpack.security.enabled
value: "true"
image: elasticsearch:7.3.2
imagePullPolicy: IfNotPresent
name: elasticsearch
ports:
- containerPort: 9200
name: http
protocol: TCP
- containerPort: 9300
name: transport
protocol: TCP
resources:
limits:
cpu: 451m
memory: 1250Mi
requests:
cpu: 250m
memory: 1000Mi
securityContext:
privileged: true
runAsUser: 1000
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /usr/share/elasticsearch/data
name: elasticsearch-data
dnsPolicy: ClusterFirst
initContainers:
# all setup commands joined into one sh -c script so that each of them actually executes
- command:
- sh
- -c
- chown -R 1000:1000 /usr/share/elasticsearch/data; sysctl -w vm.max_map_count=262144; chmod 777 /usr/share/elasticsearch/data; chmod 777 /usr/share/elasticsearch/data/node; chmod g+rwx /usr/share/elasticsearch/data; chgrp 1000 /usr/share/elasticsearch/data
image: busybox:1.29.2
imagePullPolicy: IfNotPresent
name: set-dir-owner
resources: {}
securityContext:
privileged: true
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /usr/share/elasticsearch/data
name: elasticsearch-data
restartPolicy: Always
schedulerName: default-scheduler
securityContext: {}
terminationGracePeriodSeconds: 10
updateStrategy:
type: OnDelete
volumeClaimTemplates:
- metadata:
creationTimestamp: null
name: elasticsearch-data
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 10Gi
volumeMode: Filesystem
I would also recommend checking out the Helm charts for Elasticsearch:
1. https://github.com/elastic/helm-charts/tree/master/elasticsearch
2. https://github.com/helm/charts/tree/master/stable/elasticsearch
You can expose the above StatefulSet using a Service and use it further with the application, as sketched below.
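A sketch of such a Service, with names matching the StatefulSet above (the headless form shown here is what serviceName: elasticsearch expects; a regular ClusterIP Service with the same selector can sit alongside it for clients):
# headless governing service for the elasticsearch StatefulSet
apiVersion: v1
kind: Service
metadata:
name: elasticsearch
namespace: default
labels:
app: elasticsearch
spec:
clusterIP: None
selector:
app: elasticsearch
ports:
- name: http
port: 9200
targetPort: 9200
- name: transport
port: 9300
targetPort: 9300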
I'm trying to set up an NSQ cluster in Kubernetes and having issues.
Basically, I want to scale out NSQ and NSQ Lookup. I have a StatefulSet (2 nodes) definition for both of them. To avoid posting the whole YAML file, I'll post only the relevant part for NSQ.
NSQ container template
command:
- /nsqd
- -data-path
- /data
- -lookupd-tcp-address
- nsqlookupd.default.svc.cluster.local:4160
Here nsqlookupd.default.svc.cluster.local is a K8s headless service. By doing this I'm expecting the NSQ instance to open a connection to all of the NSQ Lookup instances, which in fact is not happening; it just opens a connection to a random one. However, if I explicitly list all of the NSQ Lookup hosts like this, it works:
command:
- /nsqd
- -data-path
- /data
- -lookupd-tcp-address
- nsqlookupd-0.nsqlookupd:4160
- -lookupd-tcp-address
- nsqlookupd-1.nsqlookupd:4160
I also wanted to use the headless service DNS name in --broadcast-address for both NSQ and NSQ Lookup, but that doesn't work either.
I'm using the nsqio Go library for publishing and consuming messages, and it looks like I can't use the headless service there either and have to explicitly list NSQ/NSQ Lookup pod names when initializing a consumer or publisher.
Am I using this in the wrong way?
I mean I want to have horizontally scaled NSQ and NSQLookup instances and not hardcode the addresses.
You could use a StatefulSet and a headless service to achieve this goal.
Try using the config YAML below to deploy it into the K8s cluster, or else check out the official Helm chart: https://github.com/nsqio/helm-chart
Using a headless service you can discover all pod IPs (see the verification note after the manifests below).
apiVersion: v1
kind: Service
metadata:
name: nsqlookupd
labels:
app: nsq
spec:
ports:
- port: 4160
targetPort: 4160
name: tcp
- port: 4161
targetPort: 4161
name: http
publishNotReadyAddresses: true
clusterIP: None
selector:
app: nsq
component: nsqlookupd
---
apiVersion: v1
kind: Service
metadata:
name: nsqd
labels:
app: nsq
spec:
ports:
- port: 4150
targetPort: 4150
name: tcp
- port: 4151
targetPort: 4151
name: http
clusterIP: None
selector:
app: nsq
component: nsqd
---
apiVersion: v1
kind: Service
metadata:
name: nsqadmin
labels:
app: nsq
spec:
ports:
- port: 4170
targetPort: 4170
name: tcp
- port: 4171
targetPort: 4171
name: http
selector:
app: nsq
component: nsqadmin
---
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
name: nsqlookupd
spec:
serviceName: "nsqlookupd"
replicas: 3
updateStrategy:
type: RollingUpdate
template:
metadata:
labels:
app: nsq
component: nsqlookupd
spec:
affinity:
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchExpressions:
- key: app
operator: In
values:
- nsq
- key: component
operator: In
values:
- nsqlookupd
topologyKey: "kubernetes.io/hostname"
containers:
- name: nsqlookupd
image: nsqio/nsq:v1.1.0
imagePullPolicy: Always
resources:
requests:
cpu: 30m
memory: 64Mi
ports:
- containerPort: 4160
name: tcp
- containerPort: 4161
name: http
livenessProbe:
httpGet:
path: /ping
port: http
initialDelaySeconds: 5
readinessProbe:
httpGet:
path: /ping
port: http
initialDelaySeconds: 2
command:
- /nsqlookupd
terminationGracePeriodSeconds: 5
---
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
name: nsqd
spec:
serviceName: "nsqd"
replicas: 3
updateStrategy:
type: RollingUpdate
template:
metadata:
labels:
app: nsq
component: nsqd
spec:
affinity:
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchExpressions:
- key: app
operator: In
values:
- nsq
- key: component
operator: In
values:
- nsqd
topologyKey: "kubernetes.io/hostname"
containers:
- name: nsqd
image: nsqio/nsq:v1.1.0
imagePullPolicy: Always
resources:
requests:
cpu: 30m
memory: 64Mi
ports:
- containerPort: 4150
name: tcp
- containerPort: 4151
name: http
livenessProbe:
httpGet:
path: /ping
port: http
initialDelaySeconds: 5
readinessProbe:
httpGet:
path: /ping
port: http
initialDelaySeconds: 2
volumeMounts:
- name: datadir
mountPath: /data
command:
- /nsqd
- -data-path
- /data
- -lookupd-tcp-address
- nsqlookupd-0.nsqlookupd:4160
- -lookupd-tcp-address
- nsqlookupd-1.nsqlookupd:4160
- -lookupd-tcp-address
- nsqlookupd-2.nsqlookupd:4160
- -broadcast-address
- $(HOSTNAME).nsqd
env:
- name: HOSTNAME
valueFrom:
fieldRef:
fieldPath: metadata.name
terminationGracePeriodSeconds: 5
volumes:
- name: datadir
persistentVolumeClaim:
claimName: datadir
volumeClaimTemplates:
- metadata:
name: datadir
spec:
accessModes:
- "ReadWriteOnce"
storageClassName: ssd
resources:
requests:
storage: 1Gi
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: nsqadmin
spec:
replicas: 1
template:
metadata:
labels:
app: nsq
component: nsqadmin
spec:
containers:
- name: nsqadmin
image: nsqio/nsq:v1.1.0
imagePullPolicy: Always
resources:
requests:
cpu: 30m
memory: 64Mi
ports:
- containerPort: 4170
name: tcp
- containerPort: 4171
name: http
livenessProbe:
httpGet:
path: /ping
port: http
initialDelaySeconds: 10
readinessProbe:
httpGet:
path: /ping
port: http
initialDelaySeconds: 5
command:
- /nsqadmin
- -lookupd-http-address
- nsqlookupd-0.nsqlookupd:4161
- -lookupd-http-address
- nsqlookupd-1.nsqlookupd:4161
- -lookupd-http-address
- nsqlookupd-2.nsqlookupd:4161
terminationGracePeriodSeconds: 5
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: nsq
spec:
rules:
- host: nsq.example.com
http:
paths:
- path: /
backend:
serviceName: nsqadmin
servicePort: 4171
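As a quick verification that the headless service is doing its job (generic commands; the names come from the manifests above), resolving it from any pod in the namespace should return one A record per ready nsqlookupd pod, and each pod also gets a stable per-pod name of the form <pod>.<service>; this is why the nsqd StatefulSet passes every lookupd's per-pod address to -lookupd-tcp-address instead of the bare service name:
# from any pod in the same namespace
nslookup nsqlookupd
# expect one address per replica, plus stable per-pod names such as:
# nsqlookupd-0.nsqlookupd.default.svc.cluster.local
# nsqlookupd-1.nsqlookupd.default.svc.cluster.local
# nsqlookupd-2.nsqlookupd.default.svc.cluster.local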
I have a running Elasticsearch STS with a headless Service assigned to it:
svc.yaml:
kind: Service
apiVersion: v1
metadata:
name: elasticsearch
namespace: elasticsearch-namespace
labels:
app: elasticsearch
spec:
selector:
app: elasticsearch
clusterIP: None
ports:
- port: 9200
name: rest
- port: 9300
name: inter-node
stateful.yaml:
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: es-cluster
namespace: elasticsearch-namespace
spec:
serviceName: elasticsearch
replicas: 3
selector:
matchLabels:
app: elasticsearch
template:
metadata:
labels:
app: elasticsearch
spec:
containers:
- name: elasticsearch
image: docker.elastic.co/elasticsearch/elasticsearch:7.2.0
resources:
limits:
cpu: 1000m
requests:
cpu: 100m
ports:
- containerPort: 9200
name: rest
protocol: TCP
- containerPort: 9300
name: inter-node
protocol: TCP
volumeMounts:
- name: elasticsearch-persistent-storage
mountPath: /usr/share/elasticsearch/data
env:
- name: cluster.name
value: k8s-logs
- name: node.name
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: discovery.seed_hosts
value: "es-cluster-0.elasticsearch,es-cluster-1.elasticsearch,es-cluster-2.elasticsearch"
- name: cluster.initial_master_nodes
value: "es-cluster-0,es-cluster-1,es-cluster-2"
- name: ES_JAVA_OPTS
value: "-Xms512m -Xmx512m"
initContainers:
- name: fix-permissions
image: busybox
command: ["sh", "-c", "chown -R 1000:1000 /usr/share/elasticsearch/data"]
securityContext:
privileged: true
volumeMounts:
- name: elasticsearch-persistent-storage
mountPath: /usr/share/elasticsearch/data
- name: increase-vm-max-map
image: busybox
command: ["sysctl", "-w", "vm.max_map_count=262144"]
securityContext:
privileged: true
- name: increase-fd-ulimit
image: busybox
command: ["sh", "-c", "ulimit -n 65536"]
securityContext:
privileged: true
volumeClaimTemplates:
- metadata:
name: elasticsearch-persistent-storage
labels:
app: elasticsearch
spec:
accessModes: [ "ReadWriteOnce" ]
storageClassName: elasticsearch-storageclass
resources:
requests:
storage: 20Gi
The question is: how do I access this StatefulSet from pods of a Deployment? Let's say, using this Redis pod:
apiVersion: apps/v1
kind: Deployment
metadata:
name: redis-ws-app
labels:
app: redis-ws-app
spec:
replicas: 1
selector:
matchLabels:
app: redis-ws-app
template:
metadata:
labels:
app: redis-ws-app
spec:
containers:
- name: redis-ws-app
image: redis:latest
command: [ "redis-server"]
ports:
- containerPort: 6379
I have been trying to create another Service that would enable me to access it from outside, but without any luck:
kind: Service
apiVersion: v1
metadata:
name: elasticsearch-tcp
namespace: elasticsearch-namespace
labels:
app: elasticsearch
spec:
selector:
app: elasticsearch
ports:
- protocol: TCP
port: 9200
targetPort: 9200
You can reach it directly by hitting the headless service. As an example, take this StatefulSet and this Service:
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: web
spec:
selector:
matchLabels:
app: nginx
serviceName: "nginx"
replicas: 4
template:
metadata:
labels:
app: nginx
spec:
containers:
- name: nginx
image: nginx
ports:
- containerPort: 80
name: web
---
kind: Service
apiVersion: v1
metadata:
name: nginx-headless
spec:
selector:
app: nginx
clusterIP: None
ports:
- port: 80
name: http
I can reach the pods of the StatefulSet through the headless service from any pod within the cluster:
/ # curl -I nginx-headless
HTTP/1.1 200 OK
Server: nginx/1.19.0
Date: Tue, 09 Jun 2020 12:36:47 GMT
Content-Type: text/html
Content-Length: 612
Last-Modified: Tue, 26 May 2020 15:00:20 GMT
Connection: keep-alive
ETag: "5ecd2f04-264"
Accept-Ranges: bytes
The particularity of a headless service is that it doesn't create iptables rules for that service. So, when you query that service, the request goes to kube-dns (or CoreDNS), and it returns the backends, rather than the IP address of the service itself. So, if you do an nslookup, for example, it will return all the backends (pods) of that service:
/ # nslookup nginx-headless
Name: nginx-headless
Address 1: 10.56.1.44
Address 2: 10.56.1.45
Address 3: 10.56.1.46
Address 4: 10.56.1.47
And it won't have any iptables rules assigned to it:
$ sudo iptables-save | grep -i nginx-headless
$
Unlike a normal service, that would return the IP address of the service itself:
/ # nslookup nginx
Name: nginx
Address 1: 10.60.15.30 nginx.default.svc.cluster.local
And it will have iptables rules assigned to it:
$ sudo iptables-save | grep -i nginx
-A KUBE-SERVICES ! -s 10.56.0.0/14 -d 10.60.15.30/32 -p tcp -m comment --comment "default/nginx: cluster IP" -m tcp --dport 80 -j KUBE-MARK-MASQ
-A KUBE-SERVICES -d 10.60.15.30/32 -p tcp -m comment --comment "default/nginx: cluster IP" -m tcp --dport 80 -j KUBE-SVC-4N57TFCL4MD7ZTDA
User @suren was right about the headless service. In my case, I was just using the wrong reference.
The Kube-DNS naming convention is
service.namespace.svc.cluster-domain.tld
and the default cluster domain is cluster.local
In my case, in order to reach the pods, one has to use:
curl -I elasticsearch.elasticsearch-namespace
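And because the governing service is headless, each pod also gets its own stable DNS record, so a specific Elasticsearch node can be reached directly (the pod name below follows the es-cluster StatefulSet above):
curl -I es-cluster-0.elasticsearch.elasticsearch-namespace.svc.cluster.local:9200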