I have deployed Logstash and Elasticsearch pods on an EKS cluster. When I check the logs of the Logstash pod, it reports that the Elasticsearch server is unreachable, even though Elasticsearch is up and running. Please find the YAML files and the log error below.
configMap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: "logstash-configmap-development"
namespace: "development"
labels:
app: "logstash-development"
data:
logstash.conf: |-
input {
http {
}
}
filter {
json {
source => "message"
}
}
output {
elasticsearch {
hosts => ["https://my-server.com/elasticsearch-development/"]
index => "%{[#metadata][beat]}-%{[#metadata][version]}-%{+YYYY.MM.dd}"
}
stdout {
codec => rubydebug
}
}
deployment.yaml
---
apiVersion: "apps/v1"
kind: "Deployment"
metadata:
name: "logstash-development"
namespace: "development"
spec:
selector:
matchLabels:
app: "logstash-development"
replicas: 1
strategy:
type: "RollingUpdate"
rollingUpdate:
maxSurge: 1
maxUnavailable: 1
minReadySeconds: 5
template:
metadata:
labels:
app: "logstash-development"
spec:
containers:
-
name: "logstash-development"
image: "logstash:7.10.2"
imagePullPolicy: "Always"
env:
-
name: "XPACK_MONITORING_ELASTICSEARCH_HOSTS"
value: "https://my-server.com/elasticsearch-development/"
-
name: "XPACK_MONITORING_ELASTICSEARCH_URL"
value: "https://my-server.com/elasticsearch-development/"
-
name: "SERVER_BASEPATH"
value: "logstash-development"
securityContext:
privileged: true
ports:
-
containerPort: 8080
protocol: TCP
volumeMounts:
-
name: "logstash-conf-volume"
mountPath: "/usr/share/logstash/pipeline/"
volumes:
-
name: "logstash-conf-volume"
configMap:
name: "logstash-configmap-development"
items:
- key: "logstash.conf"
path: "logstash.conf"
imagePullSecrets:
-
name: "logstash"
service.yaml
---
apiVersion: "v1"
kind: "Service"
metadata:
name: "logstash-development"
namespace: "development"
labels:
app: "logstash-development"
spec:
ports:
-
port: 55770
targetPort: 8080
selector:
app: "logstash-development"
Logstash pod log error
[2021-06-09T08:22:38,708][WARN ][logstash.licensechecker.licensereader] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"https://my-server.com/elasticsearch-development/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [https://my-server.com/elasticsearch-development/][Manticore::ConnectTimeout] connect timed out"}
Note: Elasticsearch is up and running, and when I hit the Logstash URL it returns status OK.
I have checked with the Elasticsearch cluster IP: with that, Logstash is able to connect to Elasticsearch, but when I use the ingress path URL it cannot connect.
Also, from the logs I noticed it is using an incorrect URL for Elasticsearch.
My Elasticsearch URL looks like this: https://my-server.com/elasticsearch
but instead Logstash is looking for https://my-server.com:9200/elasticsearch.
With that URL (https://my-server.com:9200/elasticsearch) Elasticsearch is not accessible, so the result is a connection timeout.
Can someone tell me why it is using https://my-server.com:9200/elasticsearch and not https://my-server.com/elasticsearch?
I am now able to connect Logstash to Elasticsearch. If you point Logstash at Elasticsearch using a DNS name without a port, Logstash defaults to port 9200, so in my case it resolved the Elasticsearch URL to https://my-server.com:9200/elasticsearch-development/. With that URL Elasticsearch was not accessible; it was only reachable at https://my-server.com/elasticsearch-development/. So I had to add the HTTPS port, 443, to the Elasticsearch URL, after which Logstash was able to connect: https://my-server.com:443/elasticsearch-development/
Long story short:
In the deployment.yaml file, the environment variables XPACK_MONITORING_ELASTICSEARCH_HOSTS and XPACK_MONITORING_ELASTICSEARCH_URL were given the value https://my-server.com:443/elasticsearch-development/
The same value was used in the logstash.conf file.
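For reference, a minimal sketch of the corrected output block with the explicit HTTPS port (same host and index pattern as above; the :443 is what keeps Logstash from falling back to its default of 9200):
output {
  elasticsearch {
    hosts => ["https://my-server.com:443/elasticsearch-development/"]
    index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
  }
  stdout {
    codec => rubydebug
  }
}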
Related
I deployed an nginx pod as a Deployment in Kubernetes.
Now I want to deploy Filebeat and Logstash in the same cluster to get the nginx logs.
Here are my manifest files.
nginx.yaml
---
apiVersion: v1
kind: Namespace
metadata:
name: logs
---
apiVersion: apps/v1
kind: Deployment
metadata:
namespace: logs
name: nginx
spec:
replicas: 1
selector:
matchLabels:
app: nginx
template:
metadata:
labels:
app: nginx
spec:
containers:
- name: nginx
image: nginx:1.7.9
ports:
- containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
namespace: logs
name: nginx
labels:
app: nginx
spec:
type: LoadBalancer
ports:
- port: 80
protocol: TCP
targetPort: http
selector:
app: nginx
filebeat.yaml
---
apiVersion: v1
kind: Namespace
metadata:
name: logs
---
apiVersion: v1
kind: ConfigMap
metadata:
name: filebeat-config
namespace: logs
labels:
k8s-app: filebeat
data:
filebeat.yml: |-
filebeat.autodiscover:
providers:
- type: kubernetes
host: ${NODE_NAME}
hints.enabled: true
templates:
- condition.contains:
kubernetes.namespace: logs
config:
- module: nginx
access:
enabled: true
var.paths: ["/var/log/nginx/access.log*"]
subPath: access.log
tags: ["access"]
error:
enabled: true
var.paths: ["/var/log/nginx/error.log*"]
subPath: error.log
tags: ["error"]
processors:
- add_cloud_metadata:
- add_host_metadata:
cloud.id: ${ELASTIC_CLOUD_ID}
cloud.auth: ${ELASTIC_CLOUD_AUTH}
output.logstash:
hosts: ["logstash:5044"]
loadbalance: true
index: filebeat
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: filebeat
namespace: logs
labels:
k8s-app: filebeat
spec:
selector:
matchLabels:
k8s-app: filebeat
template:
metadata:
labels:
k8s-app: filebeat
spec:
serviceAccountName: filebeat
terminationGracePeriodSeconds: 30
hostNetwork: true
dnsPolicy: ClusterFirstWithHostNet
containers:
- name: filebeat
image: docker.elastic.co/beats/filebeat:7.10.0
args: [
"-c", "/usr/share/filebeat/filebeat.yml",
"-e",
]
env:
- name: NODE_NAME
valueFrom:
fieldRef:
fieldPath: spec.nodeName
securityContext:
runAsUser: 0
resources:
limits:
memory: 200Mi
requests:
cpu: 100m
memory: 100Mi
volumeMounts:
- name: config
mountPath: /etc/filebeat.yml
subPath: filebeat.yml
readOnly: true
- name: data
mountPath: /usr/share/filebeat/data
- name: varlibdockercontainers
mountPath: /var/lib/docker/containers
readOnly: true
- name: varlog
mountPath: /var/log
readOnly: true
volumes:
- name: config
configMap:
defaultMode: 0600
name: filebeat-config
- name: varlibdockercontainers
hostPath:
path: /var/lib/docker/containers
- name: varlog
hostPath:
path: /var/log
- name: data
hostPath:
path: /var/lib/filebeat-data
type: DirectoryOrCreate
logstash.yaml
---
apiVersion: v1
kind: Namespace
metadata:
name: logs
---
apiVersion: v1
kind: Service
metadata:
namespace: logs
labels:
app: logstash
name: logstash
spec:
ports:
- name: "25826"
port: 25826
targetPort: 25826
- name: "5044"
port: 5044
targetPort: 5044
selector:
app: logstash
status:
loadBalancer: {}
---
apiVersion: v1
kind: ConfigMap
metadata:
namespace: logs
name: logstash-configmap
data:
logstash.yml: |
http.host: "0.0.0.0"
path.config: /usr/share/logstash/pipeline
logstash.conf: |
input {
beats {
port => 5044
host => "0.0.0.0"
}
}
filter {
if [fileset][module] == "nginx" {
if [fileset][name] == "access" {
grok {
match => { "message" => ["%{IPORHOST:[nginx][access][remote_ip]} - %{DATA:[nginx][access][user_name]} \[%{HTTPDATE:[nginx][access][time]}\] \"%{WORD:[nginx][access][method]} %{DATA:[nginx][access][url]} HTTP/%{NUMBER:[nginx][access][http_version]}\" %{NUMBER:[nginx][access][response_code]} %{NUMBER:[nginx][access][body_sent][bytes]} \"%{DATA:[nginx][access][referrer]}\" \"%{DATA:[nginx][access][agent]}\""] }
remove_field => "message"
}
mutate {
add_field => { "read_timestamp" => "%{#timestamp}" }
}
date {
match => [ "[nginx][access][time]", "dd/MMM/YYYY:H:m:s Z" ]
remove_field => "[nginx][access][time]"
}
useragent {
source => "[nginx][access][agent]"
target => "[nginx][access][user_agent]"
remove_field => "[nginx][access][agent]"
}
geoip {
source => "[nginx][access][remote_ip]"
target => "[nginx][access][geoip]"
}
}
else if [fileset][name] == "error" {
grok {
match => { "message" => ["%{DATA:[nginx][error][time]} \[%{DATA:[nginx][error][level]}\] %{NUMBER:[nginx][error][pid]}#%{NUMBER:[nginx][error][tid]}: (\*%{NUMBER:[nginx][error][connection_id]} )?%{GREEDYDATA:[nginx][error][message]}"] }
remove_field => "message"
}
mutate {
rename => { "#timestamp" => "read_timestamp" }
}
date {
match => [ "[nginx][error][time]", "YYYY/MM/dd H:m:s" ]
remove_field => "[nginx][error][time]"
}
}
}
}
output {
stdout { codec => rubydebug }
}
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: logstash-nginx-to-gcs
namespace: logs
spec:
serviceName: "logstash"
selector:
matchLabels:
app: logstash
updateStrategy:
type: RollingUpdate
template:
metadata:
labels:
app: logstash
spec:
terminationGracePeriodSeconds: 10
volumes:
- name: logstash-service-account-credentials
secret:
secretName: logstash-credentials
containers:
- name: logstash
image: docker.elastic.co/logstash/logstash:7.10.0
volumeMounts:
- name: logstash-service-account-credentials
mountPath: /secrets/logstash
readOnly: true
resources:
limits:
memory: 2Gi
volumeClaimTemplates:
- metadata:
name: logstash-nginx-to-gcs-pvc
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 1Ki
I deployed them, but I'm not sure how Filebeat can fetch the nginx logs from the pod, since Filebeat runs as a DaemonSet.
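From the Filebeat documentation, my understanding is that each DaemonSet pod tails the container log files of its own node through the /var/log and /var/lib/docker/containers hostPath mounts, and hints-based autodiscover expands to roughly one container input per pod, along these lines (a sketch based on the documented defaults, not something configured explicitly above):
filebeat.autodiscover:
  providers:
    - type: kubernetes
      host: ${NODE_NAME}
      templates:
        - config:
            - type: container
              paths:
                - /var/log/containers/*-${data.kubernetes.container.id}.log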
When I check logstash's logs
kubectl logs -f logstash-nginx-to-gcs-0 -n logs
Using bundled JDK: /usr/share/logstash/jdk
OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in a future release.
WARNING: An illegal reflective access operation has occurred
WARNING: Illegal reflective access by org.jruby.ext.openssl.SecurityHelper (file:/tmp/jruby-1/jruby18310714590719622705jopenssl.jar) to field java.security.MessageDigest.provider
WARNING: Please consider reporting this to the maintainers of org.jruby.ext.openssl.SecurityHelper
WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
WARNING: All illegal access operations will be denied in a future release
Sending Logstash logs to /usr/share/logstash/logs which is now configured via log4j2.properties
[2020-12-04T09:37:17,563][INFO ][logstash.runner ] Starting Logstash {"logstash.version"=>"7.10.0", "jruby.version"=>"jruby 9.2.13.0 (2.5.7) 2020-08-03 9a89c94bcc OpenJDK 64-Bit Server VM 11.0.8+10 on 11.0.8+10 +indy +jit [linux-x86_64]"}
[2020-12-04T09:37:17,659][INFO ][logstash.setting.writabledirectory] Creating directory {:setting=>"path.queue", :path=>"/usr/share/logstash/data/queue"}
[2020-12-04T09:37:17,712][INFO ][logstash.setting.writabledirectory] Creating directory {:setting=>"path.dead_letter_queue", :path=>"/usr/share/logstash/data/dead_letter_queue"}
[2020-12-04T09:37:18,748][INFO ][logstash.agent ] No persistent UUID file found. Generating new UUID {:uuid=>"8b873949-cf90-491a-b76a-e3e7caa7f593", :path=>"/usr/share/logstash/data/uuid"}
[2020-12-04T09:37:20,050][WARN ][logstash.monitoringextension.pipelineregisterhook] xpack.monitoring.enabled has not been defined, but found elasticsearch configuration. Please explicitly set `xpack.monitoring.enabled: true` in logstash.yml
[2020-12-04T09:37:20,060][WARN ][deprecation.logstash.monitoringextension.pipelineregisterhook] Internal collectors option for Logstash monitoring is deprecated and targeted for removal in the next major version.
Please configure Metricbeat to monitor Logstash. Documentation can be found at:
https://www.elastic.co/guide/en/logstash/current/monitoring-with-metricbeat.html
[2020-12-04T09:37:21,056][WARN ][deprecation.logstash.outputs.elasticsearch] Relying on default value of `pipeline.ecs_compatibility`, which may change in a future major release of Logstash. To avoid unexpected changes when upgrading Logstash, please explicitly declare your desired ECS Compatibility mode.
[2020-12-04T09:37:22,082][INFO ][logstash.licensechecker.licensereader] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://elasticsearch:9200/]}}
[2020-12-04T09:37:22,572][WARN ][logstash.licensechecker.licensereader] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://elasticsearch:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://elasticsearch:9200/][Manticore::ResolutionFailure] elasticsearch: Name or service not known"}
[2020-12-04T09:37:22,705][WARN ][logstash.licensechecker.licensereader] Marking url as dead. Last error: [LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError] Elasticsearch Unreachable: [http://elasticsearch:9200/][Manticore::ResolutionFailure] elasticsearch {:url=>http://elasticsearch:9200/, :error_message=>"Elasticsearch Unreachable: [http://elasticsearch:9200/][Manticore::ResolutionFailure] elasticsearch", :error_class=>"LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError"}
[2020-12-04T09:37:22,791][ERROR][logstash.licensechecker.licensereader] Unable to retrieve license information from license server {:message=>"Elasticsearch Unreachable: [http://elasticsearch:9200/][Manticore::ResolutionFailure] elasticsearch"}
[2020-12-04T09:37:22,921][ERROR][logstash.monitoring.internalpipelinesource] Failed to fetch X-Pack information from Elasticsearch. This is likely due to failure to reach a live Elasticsearch cluster.
[2020-12-04T09:37:25,467][INFO ][org.reflections.Reflections] Reflections took 262 ms to scan 1 urls, producing 23 keys and 47 values
[2020-12-04T09:37:26,613][INFO ][logstash.javapipeline ][main] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>1, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>125, "pipeline.sources"=>["/usr/share/logstash/pipeline/logstash.conf"], :thread=>"#<Thread:0x726c3bee run>"}
[2020-12-04T09:37:28,365][INFO ][logstash.javapipeline ][main] Pipeline Java execution initialization time {"seconds"=>1.74}
[2020-12-04T09:37:28,464][INFO ][logstash.inputs.beats ][main] Beats inputs: Starting input listener {:address=>"0.0.0.0:5044"}
[2020-12-04T09:37:28,525][INFO ][logstash.javapipeline ][main] Pipeline started {"pipeline.id"=>"main"}
[2020-12-04T09:37:28,819][INFO ][logstash.agent ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
[2020-12-04T09:37:29,036][INFO ][org.logstash.beats.Server][main][0710cad67e8f47667bc7612580d5b91f691dd8262a4187d9eca8cf87229d04aa] Starting server on port: 5044
[2020-12-04T09:37:29,732][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}
[2020-12-04T09:37:52,828][ERROR][logstash.licensechecker.licensereader] Unable to retrieve license information from license server {:message=>"No Available connections"}
[2020-12-04T09:37:53,109][WARN ][logstash.licensechecker.licensereader] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://elasticsearch:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://elasticsearch:9200/][Manticore::ResolutionFailure] elasticsearch: Name or service not known"}
But I don't want to connect to Elasticsearch right now; I just want to test receiving the data.
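If the goal is only to test that Beats data arrives, without an Elasticsearch cluster, one option (a sketch, assuming the connection attempts come from the monitoring defaults baked into the official image, which point at http://elasticsearch:9200) is to serve logstash.yml from the ConfigMap above and explicitly disable X-Pack monitoring:
logstash.yml: |
  http.host: "0.0.0.0"
  path.config: /usr/share/logstash/pipeline
  xpack.monitoring.enabled: false
Note that the StatefulSet above currently mounts only the credentials Secret, so the ConfigMap (both logstash.yml and logstash.conf) would also need to be mounted into the Logstash container for this to take effect.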
I've followed this guide https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-deploy-eck.html and then applied this manifest:
---
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
name: elasticsearch
spec:
version: 7.5.1
nodeSets:
- name: default
count: 3
config:
node.master: true
node.data: true
node.ingest: true
node.store.allow_mmap: false
---
apiVersion: kibana.k8s.elastic.co/v1
kind: Kibana
metadata:
name: kibana
spec:
version: 7.5.1
count: 1
elasticsearchRef:
name: elasticsearch
---
apiVersion: apm.k8s.elastic.co/v1
kind: ApmServer
metadata:
name: apm-server
spec:
version: 7.5.1
count: 1
elasticsearchRef:
name: "elasticsearch"
config:
apm-server:
rum.enabled: true
ilm.enabled: true
rum.event_rate.limit: 300
rum.event_rate.lru_size: 1000
rum.allow_origins: ['*']
rum.library_pattern: "node_modules|bower_components|~"
rum.exclude_from_grouping: "^/webpack"
rum.source_mapping.enabled: true
rum.source_mapping.cache.expiration: 5m
rum.source_mapping.index_pattern: "apm-*-sourcemap*"
http:
service:
spec:
type: LoadBalancer
tls:
selfSignedCertificate:
disabled: true
Then, with kubectl port-forward pod/kibana-kb-5bb5bf69c9-5m5r5 5601, I'm trying to log in to Kibana to check whether APM is working correctly, but I cannot find any password for Elasticsearch or Kibana. So, how do I get the password to access it? Which secret is it?
kubectl get secret $ELASTICSEARCH_NAME-es-elastic-user -o=jsonpath='{.data.elastic}' | base64 --decode; echo
as described here did not work for you? Does that secret exist?
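With the manifest above, the Elasticsearch resource is named elasticsearch, so presumably the concrete command would be:
kubectl get secret elasticsearch-es-elastic-user -o=jsonpath='{.data.elastic}' | base64 --decode; echo
The decoded value is the password for the built-in elastic user.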
I am using the spring-cloud-starter-kubernetes-all dependency to read a ConfigMap from my Spring Boot microservices, and it is working fine.
After modifying the ConfigMap I call the refresh endpoint:
minikube service list # to get the service URL
curl http://192.168.99.100:30824/actuator/refresh -d {} -H "Content-Type: application/json"
It works as expected and the application loads the ConfigMap changes.
Issue
The above works fine if I have only one pod of my application, but when I use more than one pod, only one pod picks up the changes, not all of them.
In the example below, only one pod picks up the changes:
[message-producer-5dc4b8b456-tbbjn message-producer] Say Hello to the World12431
[message-producer-5dc4b8b456-qzmgb message-producer] Say Hello to the World
minikube deployment
apiVersion: apps/v1
kind: Deployment
metadata:
name: message-producer
labels:
app: message-producer
spec:
replicas: 2
selector:
matchLabels:
app: message-producer
template:
metadata:
labels:
app: message-producer
spec:
containers:
- name: message-producer
image: sandeepbhardwaj/message-producer
ports:
- containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
name: message-producer
spec:
selector:
app: message-producer
ports:
- protocol: TCP
port: 80
targetPort: 8080
type: LoadBalancer
configmap.yml
kind: ConfigMap
apiVersion: v1
metadata:
name: message-producer
data:
application.yml: |-
message: Say Hello to the World
bootstrap.yml
spring:
cloud:
kubernetes:
config:
enabled: true
name: message-producer
namespace: default
reload:
enabled: true
mode: EVENT
strategy: REFRESH
period: 3000
configuration
@ConfigurationProperties(prefix = "")
@Configuration
@Getter
@Setter
public class MessageConfiguration {
private String message = "Default message";
}
rbac
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
namespace: default # "namespace" can be omitted since ClusterRoles are not namespaced
name: service-reader
rules:
- apiGroups: [""] # "" indicates the core API group
resources: ["services"]
verbs: ["get", "watch", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
# This cluster role binding allows anyone in the "manager" group to read secrets in any namespace.
kind: ClusterRoleBinding
metadata:
name: service-reader
subjects:
- kind: User
name: default # Name is case sensitive
apiGroup: rbac.authorization.k8s.io
roleRef:
kind: ClusterRole
name: service-reader
apiGroup: rbac.authorization.k8s.io
This is happening because when you hit curl http://192.168.99.100:30824/actuator/refresh -d {} -H "Content-Type: application/json", Kubernetes sends that request to only one of the pods behind the service (round-robin load balancing).
You should use the property source reload feature by setting spring.cloud.kubernetes.reload.enabled=true. This reloads the properties in each pod whenever the ConfigMap changes, so you don't need to call the refresh endpoint at all.
I am using a single-node Elasticsearch server and a Java application based on the Elasticsearch high-level REST client. Both are running in a Kubernetes cluster.
@Bean(destroyMethod = "close")
public RestHighLevelClient client(){
RestHighLevelClient client = null;
Logger.getLogger(getClass().getName()).info("Connecting to elasticsearch on host : " + host);
client = new RestHighLevelClient(RestClient.builder(new HttpHost(host, port, "http")));
return client;
}
This works fine until the service has been idle for about 10 minutes. When the Java service then tries to query the Elasticsearch server, an exception is thrown:
java.io.IOException: Connection reset
at org.elasticsearch.client.RestClient$SyncResponseListener.get(RestClient.java:948) ~[elasticsearch-rest-client-6.4.3.jar!/:7.2.0]
at org.elasticsearch.client.RestClient.performRequest(RestClient.java:227) ~[elasticsearch-rest-client-6.4.3.jar!/:7.2.0]
at org.elasticsearch.client.RestHighLevelClient.internalPerformRequest(RestHighLevelClient.java:1448) ~[elasticsearch-rest-high-level-client-7.2.0.jar!/:7.2.0]
at org.elasticsearch.client.RestHighLevelClient.performRequest(RestHighLevelClient.java:1418) ~[elasticsearch-rest-high-level-client-7.2.0.jar!/:7.2.0]
at org.elasticsearch.client.RestHighLevelClient.performRequestAndParseEntity(RestHighLevelClient.java:1388) ~[elasticsearch-rest-high-level-client-7.2.0.jar!/:7.2.0]
at org.elasticsearch.client.RestHighLevelClient.search(RestHighLevelClient.java:930) ~[elasticsearch-rest-high-level-client-7.2.0.jar!/:7.2.0]
When I send the request to the service about three times, it works again, but after roughly 10 minutes of idle time the service throws the same exception. I have a docker-compose setup with the same images and there is no such issue there.
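A common mitigation for idle HTTP connections being silently dropped somewhere between the client and the single-node cluster is to bound how long pooled connections are kept alive and to fail fast on dead sockets. Below is a sketch against the low-level RestClientBuilder, e.g. inside the client() bean above; the timeout and keep-alive values are assumptions, not taken from the original setup:
@Bean(destroyMethod = "close")
public RestHighLevelClient client() {
    return new RestHighLevelClient(
        RestClient.builder(new HttpHost(host, port, "http"))
            // fail fast instead of hanging on a half-open connection
            .setRequestConfigCallback(requestConfig -> requestConfig
                .setConnectTimeout(5_000)
                .setSocketTimeout(60_000))
            // recycle pooled connections before an intermediate hop drops them
            .setHttpClientConfigCallback(httpClient -> httpClient
                .setKeepAliveStrategy((response, context) -> 300_000L)));
}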
My elasticsearch deployment
apiVersion: v1
kind: Service
metadata:
name: elasticsearch
spec:
type: NodePort
ports:
- name: client
port: 9200
targetPort: 9200
- name: nodes
port: 9300
targetPort: 9300
selector:
app: elasticsearch
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: elasticsearch
spec:
serviceName: elasticsearch
selector:
matchLabels:
app: elasticsearch
template:
metadata:
labels:
app: elasticsearch
spec:
nodeSelector:
beta.kubernetes.io/os: linux
containers:
- image: docker.elastic.co/elasticsearch/elasticsearch:7.2.0
name: elasticsearch
env:
- name: cluster.name
value: "docker-cluster"
- name: 'ES_JAVA_OPTS'
value: "-Xms512m -Xmx512m"
- name: discovery.type
value: "single-node"
ports:
- containerPort: 9200
- containerPort: 9300
name: nodes
volumeMounts:
- name: elasticsearch-persistent-storage
mountPath: /usr/share/elasticsearch/data
volumes:
- name: elasticsearch-persistent-storage
persistentVolumeClaim:
claimName: elasticsearch-claim
initContainers:
- image: alpine:3.6
command: ["/sbin/sysctl", "-w", "vm.max_map_count=262144"]
name: elasticsearch-init
securityContext:
privileged: true
My Java Service
apiVersion: v1
kind: Service
metadata:
name: search
spec:
ports:
- port: 9099
targetPort: 9099
selector:
app: search
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: search
spec:
selector:
matchLabels:
app: search
strategy:
type: Recreate
replicas: 1
template:
metadata:
labels:
app: search
spec:
nodeSelector:
beta.kubernetes.io/os: linux
containers:
- image: search-service:0.0.1-SNAPSHOT
name: search
env:
- name: ELASTIC_SEARCH_HOST
value: elasticsearch
- name: ELASTIC_SEARCH_PORT
value: "9200"
- name: ELASTIC_SEARCH_CLUSTER
value: docker-cluster
ports:
- containerPort: 9099
I built a simple operator by tweaking the memcached example. The only major difference is that I need two Docker images in my pods. I got the deployment running. Here is my test.yaml, used to deploy with kubectl:
apiVersion: "cache.example.com/v1alpha1"
kind: "Memcached"
metadata:
name: "solar-demo"
spec:
size: 3
group: cache.example.com
names:
kind: Memcached
listKind: MemcachedList
plural: solar-demos
singular: solar-demo
scope: Namespaced
version: v1alpha1
I am still missing one piece though: the load-balancing part. Currently, under Docker, we are using the nginx image as a reverse proxy, configured as:
upstream api_microservice {
server api:3000;
}
upstream solar-svc_microservice {
server solar-svc:3001;
}
server {
listen $NGINX_PORT default;
location /city {
proxy_pass http://api_microservice;
}
location /solar {
proxy_pass http://solar-svc_microservice;
}
root /html;
location / {
try_files /$uri /$uri/index.html /$uri.html /index.html =404;
}
}
I want my cluster to expose port 8080 and forward it to ports 3000 and 3001 of the containers running inside my Pods.
My deployment:
dep := &appsv1.Deployment{
TypeMeta: metav1.TypeMeta{
APIVersion: "apps/v1",
Kind: "Deployment",
},
ObjectMeta: metav1.ObjectMeta{
Name: m.Name,
Namespace: m.Namespace,
},
Spec: appsv1.DeploymentSpec{
Replicas: &replicas,
Selector: &metav1.LabelSelector{
MatchLabels: ls,
},
Template: v1.PodTemplateSpec{
ObjectMeta: metav1.ObjectMeta{
Labels: ls,
},
Spec: v1.PodSpec{
Containers: []v1.Container{
{
Image: "shmukler/docker_solar-svc",
Name: "solar-svc",
Command: []string{"npm", "run", "start-solar-svc"},
Ports: []v1.ContainerPort{{
ContainerPort: 3001,
Name: "solar-svc",
}},
},
{
Image: "shmukler/docker_solar-api",
Name: "api",
Command: []string{"npm", "run", "start-api"},
Ports: []v1.ContainerPort{{
ContainerPort: 3000,
Name: "solar-api",
}},
},
},
},
},
}
What do I need to add to have an Ingress or something similar running in front of my pods?
Thank you
What do I need to add to have an Ingress or something similar running in front of my pods?
Yes, Ingress is designed for exactly that kind of task.
Ingress supports path-based routing, which lets you set up the same configuration as in your Nginx example. Moreover, one of the most popular Ingress implementations uses Nginx as the proxy.
Ingress is basically a set of rules that allows traffic, which would otherwise be dropped or forwarded elsewhere, to reach the cluster services.
Here is an example of an Ingress configuration:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: my-app
spec:
rules:
- host: '' # Empty value means ‘any host’
http:
paths:
- path: /city
backend:
serviceName: myapp
servicePort: 3000
- path: /solar
backend:
serviceName: myapp
servicePort: 3001
Also, because a Pod is not a static thing, you should create a Service object that provides a stable entry point to your application for the Ingress.
Here is an example of the Service:
kind: Service
apiVersion: v1
metadata:
name: myapp
spec:
selector:
app: "NAME_OF_YOUR_DEPLOYMENT"
ports:
- name: city
protocol: TCP
port: 3000
targetPort: 3000
- name: solar
protocol: TCP
port: 3001
targetPort: 3001