Monitoring multiple Tomcat instances on one Linux server with Metricbeat and Jolokia - elasticsearch

How do I configure Metricbeat and Jolokia, using a separate Jolokia agent port per instance, to monitor multiple Tomcat instances on one Linux server? Would it be fine to configure the tomcat module as below, and would I get accurate, separate Tomcat metrics for each instance in ELK?
cat /etc/metricbeat/modules.d/tomcat.yml
- module: tomcat
  metricsets: ['threading', 'cache', 'memory', 'requests']
  period: 10s
  hosts: ['localhost:8778', 'localhost:7777']
  path: "/jolokia/?ignoreErrors=true&canonicalNaming=false"

Related

Prometheus target showing down

Please note: my Prometheus is running in an Ubuntu terminal and my Spring Boot application is running on Windows. It seems that my Ubuntu environment is not able to reach the Windows localhost.
I have created Spring Boot metrics using Actuator, and my metrics are exposed at http://localhost:8080/actuator/prometheus.
My application.yml configuration in my springboot application looks like this:
management:
  endpoints:
    web:
      exposure:
        include: prometheus
The Prometheus configuration file, prometheus.yml, looks like this:
scrape_configs:
  # The job name is added as a label `job=<job_name>` to any timeseries
  # scraped from this config.
  - job_name: "services"
    static_configs:
      - targets: ["localhost:8080"]
    metrics_path: '/actuator/prometheus'
Despite this configuration, I see the target as down in the Prometheus interface. It says Get "http://localhost:8080/actuator/prometheus": dial tcp 127.0.0.1:8080: connect: connection refused. Why is Prometheus not able to scrape the metrics at localhost?
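Since Prometheus runs inside Ubuntu (for example under WSL) while the application listens on Windows, localhost in prometheus.yml resolves to the Ubuntu environment, not to the Windows host, which is why the connection is refused. A minimal sketch of the scrape config pointing at the Windows host instead; the IP below is a placeholder (on WSL2 the Windows host is typically the gateway address shown by ip route inside Ubuntu), and the Windows firewall must allow inbound connections on port 8080:

scrape_configs:
  - job_name: "services"
    metrics_path: '/actuator/prometheus'
    static_configs:
      # Placeholder: replace with the Windows host's IP as seen from
      # Ubuntu/WSL, not 127.0.0.1.
      - targets: ["172.20.0.1:8080"]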

How to Connect to kafka on localhost (host machine) from app inside kubernetes (minikube)

I am trying to connect my Spring Boot app (running inside minikube) to Kafka on my localhost (i.e., my laptop).
I have tried many things, including headless services, services without selectors, and updating /etc/hosts inside minikube, but nothing works yet.
I get an error from Spring Boot saying No resolvable bootstrap urls given in bootstrap.servers.
Can someone please point me to what I am doing wrong?
My Headless Service
apiVersion: v1
kind: Service
metadata:
  name: es-local-kafka
  namespace: demo
spec:
  clusterIP: None
---
apiVersion: v1
kind: Endpoints
metadata:
  name: es-local-kafka
subsets:
  - addresses:
      - ip: "10.0.2.2"
    ports:
      - name: "kafkabroker1"
        port: 9191
      - name: "kafkabroker2"
        port: 9192
      - name: "kafkabroker3"
        port: 9193
My application properties for kafka:
kafka.bootstrap-servers=${LOCALHOST}:9191,${LOCALHOST}:9192,${LOCALHOST}:9193
My Config Map:
apiVersion: v1
kind: ConfigMap
metadata:
  creationTimestamp: null
  name: rr-config
  namespace: demo
data:
  LOCALHOST: es-local-kafka.demo.svc
It's not clear whether you are trying to connect from a service running on Minikube or from the local system, and whether you want to use the Kafka on Minikube.
If your application is running on the local system and Kafka on Minikube, you can connect the application to the Kafka cluster using the Minikube IP.
Here is a good example: https://github.com/d1egoaz/minikube-kafka-cluster
git clone https://github.com/d1egoaz/minikube-kafka-cluster
cd minikube-kafka-cluster
kubectl apply -f 00-namespace/
kubectl apply -f 01-zookeeper/
kubectl apply -f 02-kafka/
kubectl apply -f 03-yahoo-kafka-manager/
kubectl get svc -n kafka-ca1 (note the Kafka port, e.g. 31445)
List the Minikube IP:
minikube ip
Now, from your local system, you can connect to Kafka on Minikube with minikube-ip:port; open http://minikube-ip:port in the browser and you will see the Kafka Manager UI.
If you are running the Spring Boot application on Minikube:
If both services are running in the same namespace, you only need the service name to connect.
Use just the service name in Spring Boot; if a port is required, you can pass it as well:
es-local-kafka
You can also try passing the full service name:
<servicename>.<namespace>.svc.cluster.local
A headless Service serves a different purpose, and a Service without a selector is odd here; in that case your Service won't be able to route to the Pods.
I eventually got a fix, and it doesn't need all the crazy stuff I was referring to in my question:
You need to make sure your Kafka broker is bound to 0.0.0.0 instead of 127.0.0.1 (localhost), which is the default in a single-node Kafka broker setup. I went with this due to time constraints and because this was just for a POC on my local machine (prod will have a proper DNS-resolvable Kafka URL anyway, with no such localhost shenanigans needed).
In the Kafka URL in your application properties file, instead of localhost, you need to use the Minikube IP. This is the same IP that you get from the command minikube ip :)
Read more about how this works here: https://minikube.sigs.k8s.io/docs/handbook/host-access/
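Since the question already injects the broker address through the rr-config ConfigMap, the same mechanism can carry this fix. A minimal sketch, assuming you keep that ConfigMap and run a minikube version that supports the host-access name documented on the handbook page linked above (otherwise substitute the IP described in the answer):

apiVersion: v1
kind: ConfigMap
metadata:
  name: rr-config
  namespace: demo
data:
  # host.minikube.internal resolves to the host machine (the laptop
  # running Kafka) from inside minikube; the broker must also be bound
  # to 0.0.0.0 as noted above.
  LOCALHOST: host.minikube.internal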

Automated Setup of Kibana and Elasticsearch with Filebeat Module in Elastic Cloud for Kubernetes (ECK)

I'm trying out the K8s Operator (a.k.a. ECK) and so far, so good.
However, I'm wondering what the right pattern is for, say, configuring Kibana and Elasticsearch with the Apache module.
I know I can do it ad hoc with:
filebeat setup --modules apache2 --strict.perms=false \
--dashboards --pipelines --template \
-E setup.kibana.host="${KIBANA_URL}"
But what's the automated way to do it? I see some docs for the Kibana dashboard portion of it but what about the rest (pipelines, etc.)?
Note: At some point, I may end up actually running a beat for the K8s cluster, but I'm not at that stage yet. At the moment, I just want to set Elasticsearch/Kibana up with the Apache module additions so that external Apache services' Filebeats can get ingested/displayed properly.
FYI, I'm on version 6.8 of the Elastic stack for now.
You can try autodiscovery with a label-based approach.
config:
  filebeat.autodiscover:
    providers:
      - type: kubernetes
        hints.default_config.enabled: "false"
        templates:
          - condition.contains:
              kubernetes.labels.app: "apache"
            config:
              - module: apache
                access:
                  enabled: true
                  var.paths: ["/path/to/log/apache/access.log*"]
                error:
                  enabled: true
                  var.paths: ["/path/to/log/apache/error.log*"]

Clean deploy of Spring boot microservices with Config Server

We have configured a Kubernetes cluster where we deploy various Spring Boot services, one of which is a Spring Cloud Config Server.
Our trouble is that when we start the cluster, all the services try to connect to the Config Server to download their configuration, and since the Config Server has not yet started, they all fail. This causes Kubernetes to retry their initialization, consuming so many resources that the Config Server itself cannot start.
We are wondering if there is a way to initialize the services so that they do not overload the cluster, or so that they peacefully wait until the Config Server starts. As of now, all services start and we have to wait about 20 minutes until the cluster works its way out.
Thanks in advance
You can use Init Containers to ping the server until it is online. An example would be:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  selector:
    matchLabels:
      app: web
  replicas: 1
  template:
    metadata:
      labels:
        app: web
    spec:
      initContainers:
        - name: wait-config-server
          image: busybox
          command: ["sh", "-c", "for i in $(seq 1 300); do nc -zvw1 config-server 8080 && exit 0 || sleep 3; done; exit 1"]
      containers:
        - name: web
          image: my-image
          ports:
            - containerPort: 80
...
In this example I am using an nc command to ping the server, but you can also use wget, curl, or whatever suits you best.
There are various options to do the same. Choose the one that best suits you:
- You can apply a liveness or readiness probe to the Config Server. That way, the other containers can wait until the Config Server is up and running before trying to connect to it (see the sketch after this list).
- You can run Consul as a quorum of 3 or 5 instances and design the clients to ask Consul and wait until the Config Server is up and running.
- You can write a startup script that establishes the connection to the Config Server first and only then starts the containers.
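A minimal sketch of the probe option from the first bullet, assuming the Config Server is a Spring Boot app exposing Actuator's /actuator/health endpoint on port 8888 (image name and port are placeholders); this snippet goes under the Config Server Deployment's pod spec:

containers:
  - name: config-server
    image: my-config-server          # placeholder image
    ports:
      - containerPort: 8888
    readinessProbe:
      httpGet:
        path: /actuator/health
        port: 8888
      initialDelaySeconds: 20
      periodSeconds: 5
      failureThreshold: 10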

Prometheus: Address of discovered service is empty?

I've written a very basic Spring Boot 2 application that connects to Zookeeper for service discovery (by using spring-cloud-starter-zookeeper-discovery).
The application gets registered at /services/example-service with the following value:
{"name":"example-service","id":"cb14ad15-4d33-4f1c-a420-29980ddf2fa8","address":"bf3fb9191373","port":8080,"sslPort":null,"payload":{"#class":"org.springframework.cloud.zookeeper.discovery.ZookeeperInstance","id":"application-1","name":"example-service","metadata":{}},"registrationTimeUTC":1524120820273,"serviceType":"DYNAMIC","uriSpec":{"parts":[{"value":"scheme","variable":true},{"value":"://","variable":false},{"value":"address","variable":true},{"value":":","variable":false},{"value":"port","variable":true}]}}
The address is a container ID because I've deployed the stack with Docker.
My Prometheus configuration looks like this:
- job_name: 'example-service'
  metrics_path: '/actuator/prometheus'
  serverset_sd_configs:
    - servers:
        - zookeeper:2181
      paths:
        - '/services/example-service'
The service discovery page of Prometheus shows the following discovered labels:
__address__=":0" __meta_serverset_endpoint_host="" __meta_serverset_endpoint_port="0" __meta_serverset_path="/services/example-service/cb14ad15-4d33-4f1c-a420-29980ddf2fa8" __meta_serverset_shard="0" __meta_serverset_status="" __metrics_path__="/actuator/prometheus" __scheme__="http" job="example-service"
Any idea why __address__ is :0?
Serverset discovery is a particular way of using Zookeeper, which your application is not following. In this case you probably want file service discovery.
Serversets use a config like the one below:
{"serviceEndpoint":{"host":"localhost","port":9100},"additionalEndpoints":{},"status":"ALIVE"}
