kafka and JMX-exporter - bash

I am unable to use the JMX exporter to expose Kafka metrics. Can you look at my steps and correct me where needed?
I am following the steps here to enable Kafka with the JMX exporter.
These are the step-by-step instructions I followed:
#get kafka
wget kafka_2.11-2.0.0
# Download Prometheus JMX exporter:
sudo wget -P /opt/kafka/prometheus/ https://repo1.maven.org/maven2/io/prometheus/jmx/jmx_prometheus_javaagent/0.3.0/jmx_prometheus_javaagent-0.3.0.jar
sudo wget -P /opt/kafka/prometheus/ https://raw.githubusercontent.com/prometheus/jmx_exporter/master/example_configs/kafka-0-8-2.yml
# Edit the Prometheus JMX exporter config file and append the following lines
echo "- pattern : kafka.producer<type=producer-metrics, client-id=(.+)><>(.+):\w* name: kafka_producer_$2" >> /opt/kafka/prometheus/kafka-0-8-2.yml
echo "- pattern : kafka.consumer<type=consumer-metrics, client-id=(.+)><>(.+):\w* name: kafka_consumer_$2" >> /opt/kafka/prometheus/kafka-0-8-2.yml
echo "- pattern : kafka.consumer<type=consumer-fetch-manager-metrics, client-id=(.+)><>(.+):\w* name: kafka_consumer_$2" >> /opt/kafka/prometheus/kafka-0-8-2.yml
#start zookeeper in terminal 0
/opt/kafka/bin/zookeeper-server-start.sh config/zookeeper.properties
#start kafka broker in terminal 1
KAFKA_HEAP_OPTS='"-Xmx1000M -Xms1000M"'
KAFKA_OPTS="-javaagent:/opt/kafka/prometheus/jmx_prometheus_javaagent-0.3.0.jar=7071:/opt/kafka/prometheus/kafka-0-8-2.yml"
JMX_PORT=7071
/opt/kafka/bin/kafka-server-start.sh /opt/kafka/config/server.properties
#start kafka consumer in terminal 2
KAFKA_OPTS="-javaagent:/opt/kafka/prometheus/jmx_prometheus_javaagent-0.3.0.jar=7072:/opt/kafka/prometheus/kafka-0-8-2.yml"
JMX_PORT=7072
/opt/kafka/bin/kafka-console-consumer.sh --bootstrap-server 0.0.0.0:9092 --topic test --from-beginning
#start kafka producer in terminal 3
KAFKA_OPTS="-javaagent:/opt/kafka/prometheus/jmx_prometheus_javaagent-0.3.0.jar=7073:/opt/kafka/prometheus/kafka-0-8-2.yml"
JMX_PORT=7073
/opt/kafka/bin/kafka-console-producer.sh --broker-list 0.0.0.0:9092 --topic test
After the above steps ZooKeeper and Kafka are running fine.
I can type a message in the producer terminal and it is received on the consumer console. However, no Kafka metrics are visible in Prometheus. To debug this I checked ports 7071/2/3 with
netstat -tlnp | grep 7071
netstat -tlnp | grep 7072
netstat -tlnp | grep 7073
which returned nothing; this means no service is listening on those ports. It seems the JMX exporter is not enabled correctly.
Can you help me with the above issues?

From the looks of your question, you put the variables on their own lines, while the blog has them on the same line as the command.
e.g. This is how to start the Kafka server:
KAFKA_HEAP_OPTS='"-Xmx1000M -Xms1000M"' KAFKA_OPTS='-javaagent:/opt/kafka/prometheus/jmx_prometheus_javaagent-0.3.0.jar=7071:/opt/kafka/prometheus/kafka-0-8-2.yml' JMX_PORT=7081 /opt/kafka/bin/kafka-server-start.sh /opt/kafka/config/server.properties
Otherwise, you need to export the variables so the sub-process will pick them up, like you did in your previous question, which seemed to be working fine for exposing the metrics:
export KAFKA_HEAP_OPTS='"-Xmx1000M -Xms1000M"'
export KAFKA_OPTS='-javaagent:/opt/kafka/prometheus/jmx_prometheus_javaagent-0.3.0.jar=7071:/opt/kafka/prometheus/kafka-0-8-2.yml'
export JMX_PORT=7081
/opt/kafka/bin/kafka-server-start.sh /opt/kafka/config/server.properties
Note: the blog you linked to doesn't use JMX_PORT, but that port cannot be the same as the exporter's port.
I would also suggest at least downloading a version newer than 0.3 - https://repo1.maven.org/maven2/io/prometheus/jmx/jmx_prometheus_javaagent/
and using the configs for Kafka 2.0 - https://github.com/prometheus/jmx_exporter/blob/master/example_configs/kafka-2_0_0.yml
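Putting both suggestions together, the broker start line might look like this (a sketch: the 0.12.0 agent version and the /opt/kafka/prometheus paths are assumptions, and 9999 is just an arbitrary free port for JMX_PORT):
KAFKA_HEAP_OPTS='-Xmx1000M -Xms1000M' KAFKA_OPTS='-javaagent:/opt/kafka/prometheus/jmx_prometheus_javaagent-0.12.0.jar=7071:/opt/kafka/prometheus/kafka-2_0_0.yml' JMX_PORT=9999 /opt/kafka/bin/kafka-server-start.sh /opt/kafka/config/server.properties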
Sidenote: netstat -tlnp | grep 707 would show you all of them at once.
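If the agent is loaded, each of those ports should also answer over HTTP; a quick check (a sketch using the ports from the question):
for port in 7071 7072 7073; do
  echo "--- port $port ---"
  curl -s "localhost:$port/metrics" | head -n 5
done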

Thank you cricket-007 for your help.
I am listing the steps I followed here for simplicity:
wget -q -O /tmp/kafka.tgz https://archive.apache.org/dist/kafka/2.0.0/kafka_2.11-2.0.0.tgz
mkdir /opt/kafka
tar -xf /tmp/kafka.tgz --directory /opt/kafka --strip 1
rm -f /tmp/kafka.tgz
sudo wget -P /opt/kafka/prometheus/ https://repo1.maven.org/maven2/io/prometheus/jmx/jmx_prometheus_javaagent/0.12.0/jmx_prometheus_javaagent-0.12.0.jar
wget https://raw.githubusercontent.com/prometheus/jmx_exporter/master/example_configs/kafka-2_0_0.yml
cd kafka
export KAFKA_OPTS="-javaagent:/opt/kafka/prometheus/jmx_prometheus_javaagent-0.12.0.jar=7071:/opt/kafka/prometheus/kafka-2_0_0.yml"
export KAFKA_HEAP_OPTS="-Xmx1000M -Xms1000M"
mv ../kafka-2_0_0.yml prometheus/
/opt/kafka/bin/kafka-server-start.sh /opt/kafka/config/server.properties
netstat -tlnpu |grep 70
tcp6 0 0 :::7071 :::* LISTEN 209455/java
udp6 0 0 :::40705 :::*
curl -s localhost_or_IP:7071 | grep -i kafka
A long list of metrics is dumped to stdout.
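For the metrics to show up in Prometheus itself, the exporter port still has to be scraped. A minimal sketch, assuming Prometheus runs on the same host, the exporter listens on 7071, and scrape_configs: is the last section of /etc/prometheus/prometheus.yml (the job name and interval are arbitrary):
# append a scrape job for the broker's JMX exporter (may need sudo),
# then reload or restart Prometheus
cat >> /etc/prometheus/prometheus.yml <<'EOF'
  - job_name: 'kafka'
    scrape_interval: 15s
    static_configs:
      - targets: ['localhost:7071']
EOF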

Related

Running Elasticsearch-7.0 on a Travis Xenial build host

The Xenial (Ubuntu 16.04) image on Travis-CI comes with Elasticsearch-5.5 preinstalled. What should I put in my .travis.yml to run my builds against Elasticsearch-7.0?
Add these commands to your before_install step:
- curl -s -O https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-7.0.1-amd64.deb
- sudo dpkg -i --force-confnew elasticsearch-7.0.1-amd64.deb
- sudo sed -i.old 's/-Xms1g/-Xms128m/' /etc/elasticsearch/jvm.options
- sudo sed -i.old 's/-Xmx1g/-Xmx128m/' /etc/elasticsearch/jvm.options
- echo -e '-XX:+DisableExplicitGC\n-Djdk.io.permissionsUseCanonicalPath=true\n-Dlog4j.skipJansi=true\n-server\n' | sudo tee -a /etc/elasticsearch/jvm.options
- sudo chown -R elasticsearch:elasticsearch /etc/default/elasticsearch
- sudo systemctl start elasticsearch
The changes to jvm.options are done in an attempt to emulate the existing config for Elasticsearch-5.5, which I assume the Travis peeps have actually thought about.
According to the Travis docs, you should also add this line to your before_script step:
- sleep 10
This is to ensure Elasticsearch is up and running, but I haven't checked if it's actually necessary.
One small addition to @kthy's answer that had me stumbling for a bit: you need to remove - elasticsearch from your services: definition in the .travis.yml, otherwise no matter what you put in before_install, the default service will override it!
services:
- elasticsearch
Remove ^^ and then you can proceed with the steps he outlined and it should all work smoothly.
If you want to wait for Elasticsearch to start (which may take more or less than 10 seconds), replace the sleep 10 with this:
host="localhost:9200"
response=""
attempt=0
until [ "$response" = "200" ]; do
  if [ "$attempt" -ge 25 ]; then
    echo "FAILED. Elasticsearch not responding after $attempt tries."
    exit 1
  fi
  echo "Contacting Elasticsearch on ${host}. Try number ${attempt}"
  response=$(curl --write-out '%{http_code}' --silent --output /dev/null "$host")
  sleep 1
  attempt=$((attempt + 1))
done
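A shorter alternative to the loop, assuming the image ships curl 7.52 or newer (which added --retry-connrefused), is to let curl do the polling; it exits non-zero and fails the step if Elasticsearch never comes up:
- curl --silent --output /dev/null --retry 25 --retry-delay 1 --retry-connrefused "localhost:9200"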

Continuously listen to tcp port via terminal

Is it possible to listen to a port continuously?
I listen for incoming TCP notifications with the following command:
sudo nc -l -p 999
But as soon as a notification arrives, I have to restart the listener with the same command. Is it possible to keep listening on the port, without having to restart the command each time a notification arrives, until the user decides to abort?
Sorta outdated question, but came up first on my Google search.
In order for netcat not to shut down as soon as the first connection is received, you can add the -k option.
From the man page:
-k Forces nc to stay listening for another connection after its current connection is completed. It is an error to use this option without the -l option.
Src: https://superuser.com/a/708133/410908
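For example, keeping the command from the question and just adding -k (this assumes a netcat build that actually supports -k, such as OpenBSD nc or ncat; traditional netcat does not have this option):
sudo nc -k -l -p 999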
Solved with a simple bash script
#!/bin/bash
# Make sure the script is run as root
if [ "$(id -u)" != 0 ]; then
  echo; echo -e "\e[1;31mScript must be run as sudo. Please type \"sudo\" to run as root\e[0m"; echo
  exit 1
fi
echo "Enter port to listen on"
read -r portL
while true; do
  nc -l -p "$portL"
done
exit 0
Thanks dreamlax for the tip!

Starting amqp-consume on Debian 8

I used to consume messages with amqp-consume using the command below on Debian 7, but after installing Debian 8 I think the amqp-tools are different and it no longer recognizes my command.
I noticed some changes; the web interface port changed from 55672 to 15672.
amqp-consume -d -q queue.udrive.admin.uiscsi -s 10.0.1.251 -p 5672 -e "directExchangeUdrive" --vhost "/" -r "" --username=guest --password=guest /bin/bash remoteManageUiSCSI.sh
error: both --server and --url options specify server host
This is what the command expects:
amqp-consume
consuming command not specified
Usage: amqp-consume [-dxA?] [-u|--url=amqp://...] [-s|--server=hostname] [--port=port] [--vhost=vhost] [--username=username] [--password=password] [--ssl] [--cacert=cacert.pem] [--key=key.pem] [--cert=cert.pem] [-q|--queue=queue] [-e|--exchange=exchange] [-r|--routing-key=routing key] [-d|--declare] [-x|--exclusive] [-A|--no-ack] [-c|--count=limit] [-p|--prefetch-count=limit] [-?|--help] [--usage] [OPTIONS]... <command> <args>
I tried all kinds of things with amqp:// and it didn't work.
I found the answer on another site, https://qpid.apache.org/releases/qpid-0.30/programming/book/QpidJNDI.html, but I still wonder why this answer is not in "man amqp-consume" or on the RabbitMQ web site...
The command that works for me is:
amqp-consume -d -u amqp://test:test@ustorageprod/%2f -q queue.udrive.admin.uiscsi -e "directExchangeUdrive" -r "" /bin/bash remoteManageUiSCSI.sh
amqp-publish -u amqp://test:test@ustorageprod/%2f -r "queue.udrive.ustorage" -e "directExchangeUdrive" -b "$msg"
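A quick way to sanity-check the URL form locally is a round trip with the other amqp-tools (a sketch: the queue name is made up, and guest/guest on localhost assumes a default RabbitMQ install; %2f is the URL-encoded "/" vhost):
URL='amqp://guest:guest@localhost:5672/%2f'
amqp-declare-queue -u "$URL" -q test.queue
amqp-publish -u "$URL" -r test.queue -b "hello"
# should print "hello" back if the URL form is accepted
amqp-get -u "$URL" -q test.queue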

Bash script udp error

I execute my bash script PLCCheck as a process
./PLCCheck &
PLCCheck
while read -r line
do
  ...
  def_host=192.168.100.110
  def_port=6002
  HOST=${2:-$def_host}
  PORT=${3:-$def_port}
  echo -n "OKConnection" | netcat -u -c $HOST $PORT
done < <(netcat -u -l -p 6001)
It listens on UDP Port 6001.
When I want to execute my second bash script SQLCheck as a process that listens on UDP port 4001:
./SQLCheck &
SQLCheck
while read -r line
do
  ...
  def_host=192.168.100.110
  def_port=6002
  HOST=${2:-$def_host}
  PORT=${3:-$def_port}
  echo -n "OPENEF1" | netcat -u -c $HOST $PORT
done < <(nc -l -p 4001)
I got this error:
Error: Couldn't setup listening socket (err=-3)
Ports 6001 and 4001 are open in iptables and each script works fine as a single process. Why do I get this error?
I have checked the man page of nc. I think it is being used in the wrong way:
-l Used to specify that nc should listen for an incoming connection rather
than initiate a connection to a remote host. It is an error to use this
option in conjunction with the -p, -s, or -z options. Additionally,
any timeouts specified with the -w option are ignored.
...
-p source_port
Specifies the source port nc should use, subject to privilege restrictions
and availability. It is an error to use this option in conjunction with the
-l option.
According to this, one should not use the -l option together with the -p option!
Try it without -p, just nc -l 4001. Maybe this is the error...
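Applied to the last line of SQLCheck, that would look like this (a sketch; whether -l accepts the port directly depends on the netcat variant installed):
# original listener line from SQLCheck
done < <(nc -l -p 4001)
# suggested form: give the port directly to -l (add -u as well if this
# listener is meant to receive UDP datagrams, like the one in PLCCheck)
done < <(nc -l 4001)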

How to make an Echo server with Bash?

How do I write an echo server bash script using tools like nc, echo, xargs, etc. that can simultaneously process requests from multiple clients, each with a durable connection?
The best that I've come up with so far is
nc -l -p 2000 -c 'xargs -n1 echo'
but it only allows a single connection.
If you use ncat instead of nc, your command line works fine with multiple connections, but (as you pointed out) without -p.
ncat -l 2000 -k -c 'xargs -n1 echo'
ncat is available at http://nmap.org/ncat/.
P.S. With Hobbit's original netcat (nc), the -c flag is not supported.
Update: -k (--keep-open) is now required to handle multiple connections.
Here are some examples from "ncat simple services".
TCP echo server
ncat -l 2000 --keep-open --exec "/bin/cat"
UDP echo server
ncat -l 2000 --keep-open --udp --exec "/bin/cat"
In case ncat is not an option, socat will also work:
socat TCP4-LISTEN:2000,fork EXEC:cat
The fork is necessary so multiple connections can be accepted. Adding reuseaddr to TCP4-LISTEN may be convenient.
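For instance (a sketch; reuseaddr simply allows the listener to be restarted without waiting for the old socket to time out):
socat TCP4-LISTEN:2000,reuseaddr,fork EXEC:cat
# test from another terminal
echo hello | nc localhost 2000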
netcat solution pre-installed in Ubuntu
The netcat pre-installed in Ubuntu 16.04 comes from netcat-openbsd, and has no -c option, but the manual gives a solution:
sudo mknod -m 777 fifo p
cat fifo | netcat -l -k localhost 8000 > fifo
Then client example:
echo abc | netcat localhost 8000
TODO: how to modify the input string value? The following does not return any reply:
cat fifo | tr 'a' 'b' | netcat -l -k localhost 8000 > fifo
The remote shell example however works:
cat fifo | /bin/sh -i 2>&1 | netcat -l -k localhost 8000 > fifo
However, I don't know of a simple way to deal with concurrent requests.
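Regarding the TODO above: one possible reason the tr pipeline returns no reply is output buffering. When its stdout is a pipe, tr block-buffers, so short inputs never make it back to netcat. A workaround sketch, assuming GNU coreutils' stdbuf is available:
cat fifo | stdbuf -oL tr 'a' 'b' | netcat -l -k localhost 8000 > fifo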
what about...
#! /bin/sh
while :; do
  /bin/nc.traditional -k -l -p 3342 -c 'xargs -n1 echo'
done
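A quick client-side test for any of these echo servers (use whichever port you started the server on; 3342 matches the loop above):
printf 'hello\nworld\n' | nc localhost 3342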
