Elastic APM Error | Google Kubernetes Engine

I am trying to run Elastic APM in a GKE cluster. I have installed Elasticsearch, Kibana and APM Server, and all services are up and running. All of these components were installed through Helm charts. Below is the configuration for each component.
apmConfig:
  apm-server.yml: |
    apm-server:
      host: "0.0.0.0:8200"
    queue: {}
    output.elasticsearch:
      hosts: ["http://elasticsearch-master.monitoring.svc.cluster.local:9200"]
      username: "elastic"
      password: "password"
kibanaConfig:
  kibana.yml: |
    server.host: 0.0.0.0
    server.port: 5601
    elasticsearch.hosts: "http://elasticsearch-master.monitoring.svc.cluster.local:9200"
    kibana.index: ".kibana"
    server.basePath: "/kibana"
    server.rewriteBasePath: true
    server.publicBaseUrl: "https://mydomain/kibana"
    elasticsearch:
      username: "kibana_system"
      password: "password"
I have tried to add the APM integration to one of my services by using the below config:
var apm = require('elastic-apm-node').start({
  // Override the service name from package.json
  // Allowed characters: a-z, A-Z, 0-9, -, _, and space
  serviceName: 'shopping',
  // Use if APM Server requires a secret token
  secretToken: '',
  // Set the custom APM Server URL (default: http://localhost:8200)
  serverUrl: 'https://mydomain/apm',
  // Set the service environment
  environment: 'production'
})
When I start the service, I get the below error in the logs:
{"log.level":"error","#timestamp":"2022-08-18T10:08:31.584Z","log":{"logger":"elastic-apm-node"},"ecs":{"version":"1.6.0"},"message":"APM Server transport error (301): Unexpected APM Server response"}
301 means Moved Permanently. It would be great if I could get any help.

The problem has been solved. The reason was that I was going through a proxy hitting port 80, while the APM Server Service was using port 8200.
After changing the port to 80 and the targetPort to 8200 in the APM Server Service, I was able to correctly instrument my services with Elastic APM.
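For reference, a minimal sketch of the corrected Service (the name and selector are hypothetical; the point is the port/targetPort pairing):
apiVersion: v1
kind: Service
metadata:
  name: apm-server        # hypothetical; use the name from your Helm chart
spec:
  selector:
    app: apm-server       # hypothetical selector
  ports:
    - name: http
      port: 80            # the port the proxy targets
      targetPort: 8200    # the port the APM Server container listens on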

Related

How to Connect to kafka on localhost (host machine) from app inside kubernetes (minikube)

I am trying to connect my Spring Boot app (running inside minikube) to Kafka on my localhost (i.e., my laptop).
I have tried many things, including headless services, services without selectors, and updating minikube's /etc/hosts, but nothing works yet.
I get an error from Spring Boot saying No resolvable bootstrap urls given in bootstrap.servers.
Can someone please point me to what I am doing wrong?
My Headless Service
apiVersion: v1
kind: Service
metadata:
  name: es-local-kafka
  namespace: demo
spec:
  clusterIP: None
---
apiVersion: v1
kind: Endpoints
metadata:
  name: es-local-kafka
subsets:
  - addresses:
      - ip: "10.0.2.2"
    ports:
      - name: "kafkabroker1"
        port: 9191
      - name: "kafkabroker2"
        port: 9192
      - name: "kafkabroker3"
        port: 9193
My application properties for kafka:
kafka.bootstrap-servers=${LOCALHOST}:9191,${LOCALHOST}:9192,${LOCALHOST}:9193
My Config Map:
apiVersion: v1
kind: ConfigMap
metadata:
  creationTimestamp: null
  name: rr-config
  namespace: demo
data:
  LOCALHOST: es-local-kafka.demo.svc
It's not clear whether you are trying to connect from a service running on minikube or from the local system, while wanting to leverage Kafka on minikube.
If your application is running on the local system and Kafka on minikube,
you can connect the application to the Kafka cluster with the IP of minikube.
Here is a good example: https://github.com/d1egoaz/minikube-kafka-cluster
git clone https://github.com/d1egoaz/minikube-kafka-cluster
cd minikube-kafka-cluster
kubectl apply -f 00-namespace/
kubectl apply -f 01-zookeeper/
kubectl apply -f 02-kafka/
kubectl apply -f 03-yahoo-kafka-manager/
kubectl get svc -n kafka-ca1 (note the NodePort of Kafka, 31445)
List the IP of minikube:
minikube ip
Now from your local system you can connect to Kafka on minikube at minikube-ip:port; opening http://minikube-ip:port in the browser shows the Kafka Manager UI.
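The application's bootstrap servers would then point at that address, something like (the minikube IP here is illustrative):
kafka.bootstrap-servers=192.168.49.2:31445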
If you are running the Spring Boot application on minikube:
If both services are running in the same namespace, you just have to use the service name to connect.
Use only the service name in Spring Boot; if a port is required you can also pass it:
es-local-kafka
You can also try passing the full service name:
<servicename>.<namespace>.svc.cluster.local
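Applied to the asker's setup, that would look something like (a sketch, using the namespace and broker ports from the question):
kafka.bootstrap-servers=es-local-kafka.demo.svc.cluster.local:9191,es-local-kafka.demo.svc.cluster.local:9192,es-local-kafka.demo.svc.cluster.local:9193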
A headless service is for different purposes, and a service without a selector is odd in that case: your service won't be able to connect to the pods.
I eventually got a fix, and it doesn't need all the crazy stuff I was referring to in my question:
You need to make sure your Kafka broker is bound to 0.0.0.0 instead of 127.0.0.1 (localhost), which is the default in a single-node Kafka broker setup. I went with this due to both the time constraint and the fact that this was just for a POC on my local machine (prod will have a specific DNS-able Kafka URL anyway, so no such localhost shenanigans are needed).
In the Kafka URL in your application properties file, instead of localhost you need to give the IP as the minikube IP. This is the same IP that you will get from the command minikube ip :)
Read more about how this works here: https://minikube.sigs.k8s.io/docs/handbook/host-access/
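Concretely, on the broker side this corresponds to something like the following in server.properties (a sketch; the listener port and the advertised IP are illustrative):
# bind on all interfaces instead of just localhost
listeners=PLAINTEXT://0.0.0.0:9191
# advertise the address the in-cluster client will use to reach this broker
advertised.listeners=PLAINTEXT://192.168.49.2:9191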

Received plaintext http traffic on an https channel, closing connection

I have deployed ECK (using Helm) on my k8s cluster and I am attempting to install Elasticsearch following the docs: https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-deploy-elasticsearch.html
I have externally exposed service/elasticsearch-prod-es-http so that I can connect to it from outside of my k8s cluster. However, as you can see, when I try to connect to it either from curl or the browser, I receive a "502 Bad Gateway" error.
curl elasticsearch.dev.acme.com
<html>
<head><title>502 Bad Gateway</title></head>
<body>
<center><h1>502 Bad Gateway</h1></center>
</body>
</html>
Upon checking the pod (elasticsearch-prod-es-default-0) I can see the following message repeated.
{"type": "server", "timestamp": "2021-04-27T13:12:20,048Z", "level": "WARN", "component": "o.e.x.s.t.n.SecurityNetty4HttpServerTransport", "cluster.name": "elasticsearch-prod", "node.name": "elasticsearch-prod-es-default-0", "message": "received plaintext http traffic on an https channel, closing connection Netty4HttpChannel{localAddress=/10.0.5.81:9200, remoteAddress=/10.0.3.50:46380}", "cluster.uuid": "t0mRfv7kREGQhXW9DVM3Vw", "node.id": "nCyAItDmSqGZRa3lApsC6g" }
Can you help me understand why this is occurring and how to fix it?
I suspect it has something to do with my TLS configuration, because when I disable TLS I'm able to connect to it externally without issues. However, in a production environment I think keeping TLS enabled is important.
FYI, I am able to port-forward the service and connect to it with curl using the -k flag.
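(Something like the following, assuming the service and credentials above:)
kubectl port-forward service/elasticsearch-prod-es-http 9200
curl -k -u "elastic:$PASSWORD" https://localhost:9200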
What I have tried
I have tried adding my domain to the SAN section as described here: https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-http-settings-tls-sans.html#k8s-elasticsearch-http-service-san
I have tried using openssl to generate a self-signed certificate, but that did not work. Trying to connect locally returns the following error message.
curl -u "elastic:$PASSWORD" "https://localhost:9200"
curl: (60) SSL certificate problem: unable to get local issuer certificate
More details here: https://curl.haxx.se/docs/sslcerts.html
curl failed to verify the legitimacy of the server and therefore could not
establish a secure connection to it. To learn more about this situation and
how to fix it, please visit the web page mentioned above.
I have tried generating a certificate using the tool described at https://www.elastic.co/guide/en/elasticsearch/reference/7.9/configuring-tls.html#tls-transport
bin/elasticsearch-certutil ca
bin/elasticsearch-certutil cert --ca elastic-stack-ca.p12 --pem
Then, using the generated .crt and .key, I created a kubectl secret elastic-tls-cert. But again, curling localhost without -k gave the following error:
curl --cacert cacert.pem -u "elastic:$PASSWORD" -XGET "https://localhost:9200"
curl: (60) SSL certificate problem: unable to get local issuer certificate
More details here: https://curl.haxx.se/docs/sslcerts.html
curl failed to verify the legitimacy of the server and therefore could not
establish a secure connection to it. To learn more about this situation and
how to fix it, please visit the web page mentioned above.
elasticsearch.yml
# This sample sets up an Elasticsearch cluster with 3 nodes.
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: elasticsearch-prod
  namespace: elastic-system
spec:
  version: 7.12.0
  nodeSets:
  - name: default
    config:
      # most Elasticsearch configuration parameters are possible to set, e.g. node.attr.attr_name: attr_value
      node.roles: ["master", "data", "ingest", "ml"]
      # this allows ES to run on nodes even if their vm.max_map_count has not been increased, at a performance cost
      node.store.allow_mmap: false
      xpack.security.enabled: true
    podTemplate:
      metadata:
        labels:
          # additional labels for pods
          foo: bar
      spec:
        nodeSelector:
          acme/node-type: ops
        # this changes the kernel setting on the node to allow ES to use mmap
        # if you uncomment this init container you will likely also want to remove the
        # "node.store.allow_mmap: false" setting above
        # initContainers:
        # - name: sysctl
        #   securityContext:
        #     privileged: true
        #   command: ['sh', '-c', 'sysctl -w vm.max_map_count=262144']
        ###
        # uncomment the line below if you are using a service mesh such as linkerd2 that uses service account tokens for pod identification.
        # automountServiceAccountToken: true
        containers:
        - name: elasticsearch
          # specify resource limits and requests
          resources:
            limits:
              memory: 4Gi
              cpu: 1
          env:
          - name: ES_JAVA_OPTS
            value: "-Xms2g -Xmx2g"
    count: 3
    # request 250Gi of persistent data storage for pods in this topology element
    volumeClaimTemplates:
    - metadata:
        name: elasticsearch-data
      spec:
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: 250Gi
        storageClassName: elasticsearch
  # inject secure settings into Elasticsearch nodes from k8s secrets references
  # secureSettings:
  # - secretName: ref-to-secret
  # - secretName: another-ref-to-secret
  #   # expose only a subset of the secret keys (optional)
  #   entries:
  #   - key: value1
  #     path: newkey # project a key to a specific path (optional)
  http:
    service:
      spec:
        # expose this cluster Service with a NodePort
        type: NodePort
    # tls:
    #   selfSignedCertificate:
    #     # add a list of SANs into the self-signed HTTP certificate
    #     subjectAltNames:
    #     - ip: 192.168.1.2
    #     - ip: 192.168.1.3
    #     - dns: elasticsearch.dev.acme.com
    #     - dns: localhost
    #   certificate:
    #     # provide your own certificate
    #     secretName: elastic-tls-cert
kubectl version
Client Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.4", GitCommit:"e87da0bd6e03ec3fea7933c4b5263d151aafd07c", GitTreeState:"clean", BuildDate:"2021-02-18T16:12:00Z", GoVersion:"go1.15.8", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"19+", GitVersion:"v1.19.6-eks-49a6c0", GitCommit:"49a6c0bf091506e7bafcdb1b142351b69363355a", GitTreeState:"clean", BuildDate:"2020-12-23T22:10:21Z", GoVersion:"go1.15.5", Compiler:"gc", Platform:"linux/amd64"}
helm list
NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
elastic-operator elastic-system 1 2021-04-26 11:18:02.286692269 +0100 BST deployed eck-operator-1.5.0 1.5.0
resources
pod/elastic-operator-0 1/1 Running 0 4h58m 10.0.5.142 ip-10-0-5-71.us-east-2.compute.internal <none> <none>
pod/elasticsearch-prod-es-default-0 1/1 Running 0 9m5s 10.0.5.81 ip-10-0-5-71.us-east-2.compute.internal <none> <none>
pod/elasticsearch-prod-es-default-1 1/1 Running 0 9m5s 10.0.1.128 ip-10-0-1-207.us-east-2.compute.internal <none> <none>
pod/elasticsearch-prod-es-default-2 1/1 Running 0 9m5s 10.0.5.60 ip-10-0-5-71.us-east-2.compute.internal <none> <none>
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
service/elastic-operator-webhook ClusterIP 172.20.218.208 <none> 443/TCP 26h app.kubernetes.io/instance=elastic-operator,app.kubernetes.io/name=elastic-operator
service/elasticsearch-prod-es-default ClusterIP None <none> 9200/TCP 9m5s common.k8s.elastic.co/type=elasticsearch,elasticsearch.k8s.elastic.co/cluster-name=elasticsearch-prod,elasticsearch.k8s.elastic.co/statefulset-name=elasticsearch-prod-es-default
service/elasticsearch-prod-es-http NodePort 172.20.229.173 <none> 9200:30604/TCP 9m6s common.k8s.elastic.co/type=elasticsearch,elasticsearch.k8s.elastic.co/cluster-name=elasticsearch-prod
service/elasticsearch-prod-es-transport ClusterIP None <none> 9300/TCP 9m6s common.k8s.elastic.co/type=elasticsearch,elasticsearch.k8s.elastic.co/cluster-name=elasticsearch-prod
aws alb ingress controller
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: elastic-ingress
  namespace: elastic-system
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/group.name: "<redacted>"
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTP":80,"HTTPS": 443}]'
    alb.ingress.kubernetes.io/certificate-arn: <redacted>
    alb.ingress.kubernetes.io/tags: Environment=prod,Team=dev
    alb.ingress.kubernetes.io/healthcheck-path: /health
    alb.ingress.kubernetes.io/healthcheck-interval-seconds: '300'
    alb.ingress.kubernetes.io/load-balancer-attributes: access_logs.s3.enabled=true,access_logs.s3.bucket=acme-aws-ingress-logs,access_logs.s3.prefix=dev-ingress
spec:
  rules:
  - host: elasticsearch.dev.acme.com
    http:
      paths:
      - path: /*
        pathType: Prefix
        backend:
          service:
            name: elasticsearch-prod-es-http
            port:
              number: 9200
  # - host: kibana.dev.acme.com
  #   http:
  #     paths:
  #     - path: /*
  #       pathType: Prefix
  #       backend:
  #         service:
  #           name: kibana-prod-kb-http
  #           port:
  #             number: 5601
If anyone comes across this problem in the future, make sure your ingress is properly configured. The error message suggests that it's a misconfiguration with the ingress:
received plaintext http traffic on an https channel, closing connection
In my case I am using the aws-load-balancer-controller. I had to attach an annotation to my ingress that forces the connection to the backend to be HTTPS rather than HTTP:
alb.ingress.kubernetes.io/backend-protocol: "HTTPS"
In my case this problem was fixed by adding the above annotation to my ingress file; it had nothing to do with setting up a custom/private TLS certificate.
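For reference, a sketch of where the annotation sits in the ingress above (only the backend-protocol line is new):
metadata:
  annotations:
    kubernetes.io/ingress.class: alb
    # tell the ALB to speak HTTPS to the backend, since Elasticsearch serves TLS on 9200
    alb.ingress.kubernetes.io/backend-protocol: "HTTPS"
    # ...the rest of the annotations from the question stay as they are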
solution for me: http => https
You have to disable HTTP SSL; for this, modify the config/elasticsearch.yml file and change the associated variable to false:
xpack.security.http.ssl:
  enabled: false
  keystore.path: certs/http.p12

ReadOnly Rest plugin giving Authentication Exception

I am using the ReadonlyREST plugin to secure Elasticsearch and Kibana, but once I added the following to my readonlyrest.yml, Kibana started giving me "Authentication Exception". What could be the reason for that?
kibana.yml
elasticsearch.username: "kibana"
elasticsearch.password: "kibana123"
readonlyrest.yml
readonlyrest:
  enable: true
  response_if_req_forbidden: Access denied!!!
  access_control_rules:
  - name: "Accept all requests from localhost"
    type: allow
    hosts: [XXX.XX.XXX.XXX]
  - name: "::Kibana server::"
    auth_key: kibana:kibana123
    type: allow
  - name: "::Kibana user::"
    auth_key: kibana:kibana123
    type: allow
    kibana_access: rw
    indices: [".kibana*","log-*"]
My Kibana and Elasticsearch are hosted on the same server; is that the reason?
Another question: if I want to make my Elasticsearch server accessible only through a particular host, can I write that host in the first section of access_control_rules, as shown in readonlyrest.yml?
Elastic version: 6.2.3
Log error: I don't remember exactly, but it was [ACL] Forbidden, showing false for all three access control rules.

Connect kibana to elasticsearch in kubernetes cluster

I have a running Elasticsearch cluster and I am trying to connect Kibana to this cluster (same node). Currently the page hangs when I try to open the service in my browser using <node-ip>:<node-port>. In my Kibana pod logs, the last few log messages are:
{"type":"log","#timestamp":"2017-10-13T17:23:46Z","tags":["listening","info"],"pid":1,"message":"Server running at http://0.0.0.0:5601"}
{"type":"log","#timestamp":"2017-10-13T17:23:46Z","tags":["status","ui settings","error"],"pid":1,"state":"red","message":"Status changed from uninitialized to red - Elasticsearch plugin is red","prevState":"uninitialized","prevMsg":"uninitialized"}
{"type":"log","#timestamp":"2017-10-13T17:23:49Z","tags":["status","plugin:ml#5.6.3","error"],"pid":1,"state":"red","message":"Status changed from yellow to red - Request Timeout after 3000ms","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
My kibana.yml file that is mounted into the kibana pod has the following config:
server.name: kibana-logging
server.host: 0.0.0.0
elasticsearch.url: http://elasticsearch:9300
xpack.security.enabled: false
xpack.monitoring.ui.container.elasticsearch.enabled: true
and my elasticsearch.yml file has the following config settings (I have 3 es pods)
cluster.name: elasticsearch-logs
node.name: ${HOSTNAME}
network.host: 0.0.0.0
bootstrap.memory_lock: false
xpack.security.enabled: false
discovery.zen.minimum_master_nodes: 2
discovery.zen.ping.unicast.hosts: ["172.17.0.3:9300", "172.17.0.4:9300", "172.17.0.4:9300"]
I feel like the issue is with the network.host field, but I'm not sure. What fields am I missing or do I need to modify in order to connect a Kibana pod to Elasticsearch if they are in the same cluster/node? Thanks!
ES Service:
apiVersion: v1
kind: Service
metadata:
  name: elasticsearch
  labels:
    component: elasticsearch
    role: master
spec:
  type: NodePort
  selector:
    component: elasticsearch
    role: master
  ports:
  - name: http
    port: 9200
    targetPort: 9200
    nodePort: 30303
    protocol: TCP
Kibana Svc
apiVersion: v1
kind: Service
metadata:
  name: kibana
  namespace: default
  labels:
    component: kibana
spec:
  type: NodePort
  selector:
    component: kibana
  ports:
  - port: 80
    targetPort: 5601
    protocol: TCP
EDIT:
After changing the port to 9200 in kibana.yml, here is what I see at the end of the logs when I try to access Kibana:
{"type":"log","#timestamp":"2017-10-13T21:36:30Z","tags":["listening","info"],"pid":1,"message":"Server running at http://0.0.0.0:5601"}
{"type":"log","#timestamp":"2017-10-13T21:36:30Z","tags":["status","ui settings","error"],"pid":1,"state":"red","message":"Status changed from uninitialized to red - Elasticsearch plugin is red","prevState":"uninitialized","prevMsg":"uninitialized"}
{"type":"log","#timestamp":"2017-10-13T21:36:33Z","tags":["status","plugin:ml#5.6.3","error"],"pid":1,"state":"red","message":"Status changed from yellow to red - Request Timeout after 3000ms","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
{"type":"log","#timestamp":"2017-10-13T21:37:02Z","tags":["error","elasticsearch","admin"],"pid":1,"message":"Request error, retrying\nPOST http://elasticsearch:9200/.reporting-*/esqueue/_search?version=true => getaddrinfo ENOTFOUND elasticsearch elasticsearch:9200"}
{"type":"log","#timestamp":"2017-10-13T21:37:32Z","tags":["warning","elasticsearch","admin"],"pid":1,"message":"Unable to revive connection: http://elasticsearch:9200/"}
{"type":"log","#timestamp":"2017-10-13T21:37:33Z","tags":["warning","elasticsearch","admin"],"pid":1,"message":"Unable to revive connection: http://elasticsearch:9200/"}
{"type":"log","#timestamp":"2017-10-13T21:37:37Z","tags":["warning","elasticsearch","admin"],"pid":1,"message":"Unable to revive connection: http://elasticsearch:9200/"}
{"type":"log","#timestamp":"2017-10-13T21:37:38Z","tags":["warning","elasticsearch","admin"],"pid":1,"message":"Unable to revive connection: http://elasticsearch:9200/"}
{"type":"log","#timestamp":"2017-10-13T21:37:42Z","tags":["warning","elasticsearch","admin"],"pid":1,"message":"Unable to revive connection: http://elasticsearch:9200/"}
The issue here is that you exposed Elasticsearch on port 9200 but are trying to connect to port 9300 in your kibana.yml file.
You either need to edit your kibana.yml file to use:
elasticsearch.url: http://elasticsearch:9200
Or change the port in the elasticsearch Service to 9300 (note, though, that 9300 is the transport port used for node-to-node communication; Kibana talks to Elasticsearch over HTTP, so pointing kibana.yml at port 9200 is the more appropriate fix).
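If the short elasticsearch name still fails to resolve from the Kibana pod (as the getaddrinfo ENOTFOUND lines above suggest), a sketch using the fully qualified service name, assuming both run in the default namespace:
elasticsearch.url: http://elasticsearch.default.svc.cluster.local:9200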

Spring Cloud Consul health check configuration

I'm running a Spring Boot application as a Docker container. This works fine so far, but it's giving me some headaches when trying to use Spring Cloud Consul as well. It reads the configuration from the Consul KVS just fine, but the health checks seem to be acting up.
The default health check uses the hostname of the docker container, for example http://users-microservice/health. Obviously this won't resolve when accessed from Consul.
No problem, the documentation mentions that you can use healthCheckPath in your bootstrap.yml file to configure it. This is what I have now:
spring:
  application:
    name: users-microservice
  cloud:
    consul:
      host: myserver.com
      port: 8500
      config:
        prefix: API-CONFIG
        profileSeparator: '__'
      discovery:
        tags: users-microservice
        healthCheckPath: http://myserver.com:${server.port}/status
        healthCheckInterval: 30s
Unfortunately, this variable seems to be used in a very different manner from what I expected. This is what Consul is trying to reach:
Get http://users:18090http//myserver.com:18090/status: dial tcp: unknown port tcp/18090http
How can I fix this? Is there some undocumented configuration parameter that I should set?
Use spring.cloud.consul.discovery.healthCheckUrl=http://myserver.com:${server.port}/status
healthCheckPath only changes the path, not the host and port.
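In the bootstrap.yml from the question, that would look something like:
spring:
  cloud:
    consul:
      discovery:
        healthCheckUrl: http://myserver.com:${server.port}/status
        healthCheckInterval: 30s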
