SASL mechanism using Kerberos with librdkafka on RHEL

How do I configure the SASL mechanism using Kerberos in the librdkafka library on RHEL?
I have already set:
WITH_SASL=y
Installed the libsasl2 package

Follow these steps:
I'm guessing you are on RHEL, so make sure to install the following packages first: cyrus-sasl cyrus-sasl-devel cyrus-sasl-gssapi
Then run ./configure from the librdkafka directory, check the output and make sure it lists WITH_SASL y.
Run make and sudo make install
Find out what port the broker's SASL_PLAINTEXT listener is listening on by either asking your Kafka ops team or by looking at the listeners=.. configuration property in the broker's server.properties file.
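For reference, such a listener in server.properties typically looks something like this (the ports here are only illustrative):
listeners=PLAINTEXT://:9092,SASL_PLAINTEXT://:9093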
Follow the steps outlined in this Wiki post to set up keytabs, etc: https://github.com/edenhill/librdkafka/wiki/Using-SASL-with-librdkafka
Verify that it works with one of the example programs, e.g.:
examples/rdkafka_example -b <broker>:<sasl_port> -L -d security -X security.protocol=SASL_PLAINTEXT -X sasl.kerberos.service.name=<service> -X sasl.kerberos.keytab=/path/to/clients.keytab -X sasl.kerberos.principal=<clientname>/<clienthost>
When you have it working with the example program, move the configuration properties into your program (rd_kafka_conf_set() et al.).
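When you get to that point, a minimal sketch in C might look like the following (the service name, keytab path, and principal are placeholders to replace with your own values):

#include <stdio.h>
#include <librdkafka/rdkafka.h>

int main(void) {
    char errstr[512];
    rd_kafka_conf_t *conf = rd_kafka_conf_new();
    /* Same properties as the -X flags above; all values are placeholders */
    if (rd_kafka_conf_set(conf, "security.protocol", "SASL_PLAINTEXT", errstr, sizeof(errstr)) != RD_KAFKA_CONF_OK ||
        rd_kafka_conf_set(conf, "sasl.kerberos.service.name", "kafka", errstr, sizeof(errstr)) != RD_KAFKA_CONF_OK ||
        rd_kafka_conf_set(conf, "sasl.kerberos.keytab", "/path/to/clients.keytab", errstr, sizeof(errstr)) != RD_KAFKA_CONF_OK ||
        rd_kafka_conf_set(conf, "sasl.kerberos.principal", "client/clienthost", errstr, sizeof(errstr)) != RD_KAFKA_CONF_OK) {
        fprintf(stderr, "config error: %s\n", errstr);
        return 1;
    }
    /* rd_kafka_new() takes ownership of conf on success */
    rd_kafka_t *rk = rd_kafka_new(RD_KAFKA_PRODUCER, conf, errstr, sizeof(errstr));
    if (!rk) {
        fprintf(stderr, "failed to create producer: %s\n", errstr);
        return 1;
    }
    rd_kafka_destroy(rk);
    return 0;
}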
Also see the more detailed SASL documentation here:
http://docs.confluent.io/3.1.1/kafka/sasl.html

Related

mosquitto broker will not log to file

I installed an updated Eclipse Mosquitto broker on a Windows host for my home alarm and control MQTT network. I had a bit of trouble getting it to listen for remote clients, but got everything working with the existing clients.
The trouble is I can't get the service to log no matter what I put in mosquitto.conf. If I run it directly from a command prompt (mosquitto -v) it logs to the console, and always says 'Using default config'. Does this mean it can't locate the CONF file? I've tried several versions of mosquitto going back to my original 1.6 and they all do the same thing.
This setting:
log_dest file c:\projects#lab\mosquitto.log
is the problem. I see where it says that a Windows service defaults to 'log_dest none', but I assumed I could override that.
I doubt # is valid in a path name.
Also, mosquitto doesn't have a default config file name; you must pass it on the command line with the -c option.
The service picks up mosquitto.conf from the install dir, but only when running as the service
Also just to be clear, -v overrides all logging options (including writing to a file). From the man page:
-v, --verbose
Use verbose logging. This is equivalent to setting log_type to all in the configuration file. This overrides any logging options
given in the configuration file.
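Putting the answers together, a hypothetical fix (the paths here are only examples): use a log path without '#' in it, and pass the config explicitly when running from a command prompt:
# mosquitto.conf - corrected logging settings
log_dest file c:\projects\lab\mosquitto.log
log_type all
Then start it without -v so the file destination isn't overridden:
mosquitto -c c:\mosquitto\mosquitto.conf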

How do I migrate a NiFi 1.10.0 flow.xml.gz to 1.14 or newer versions: sensitive properties

I have a dataflow running in NiFi 1.10.0; the relevant properties from this installation are here:
nifi.sensitive.props.key=
nifi.sensitive.props.key.protected=
nifi.sensitive.props.algorithm=PBEWITHMD5AND256BITAES-CBC-OPENSSL
nifi.sensitive.props.provider=BC
nifi.sensitive.props.additional.keys=
I am trying to migrate the flow.xml.gz to the 1.15.2 install, where the properties are:
nifi.sensitive.props.key=<redacted>
nifi.sensitive.props.key.protected=
nifi.sensitive.props.algorithm=NIFI_PBKDF2_AES_GCM_256
nifi.sensitive.props.additional.keys=
Found this section in the NiFi admin guide to help with the migration.
Has anyone done this, and what command options did you use?
Also is this a two step process since I am going from a blank key to a non-empty one and also changing the algorithm at the same time?
I used this command and the conversion works fine when you don't change the algorithm. Basically just setting a key when it was not set in the earlier 1.10.0 install.
$ ./nifi-toolkit-1.15.2/bin/encrypt-config.sh -f /path/to/nifi/nifi-1.10.0/conf/flow.xml.gz -g /path/to/nifi/nifi-1.15.2/conf/flow.xml.gz -s new_password -n /path/to/nifi/nifi-1.10.0/conf/nifi.properties -o /path/to/nifi/nifi-1.15.2/conf/nifi.properties -x
How do you change the algorithm and set the key at the same time?
Thanks
The issue can be resolved with the following steps.
Before the migration, if you don't have nifi.sensitive.props.key set, set it using the following command:
${NIFI_TOOLKIT_PAT}/bin/encrypt-config.sh -f /opt/nifi/nifi-current/data/flow.xml.gz -p ${NIFI_HOME}/conf/nifi.properties -s <NEW_KEY_TO_SET> -x
Once the key is set, upgrade NiFi. Since the algorithm changed in the newer version, set it using:
${NIFI_HOME}/bin/nifi.sh set-sensitive-properties-algorithm <NEW_ALGORITHM>
Once the algorithm is set, encrypt again using:
${NIFI_TOOLKIT_PAT}/bin/encrypt-config.sh -f /opt/nifi/nifi-current/data/flow.xml.gz -p ${NIFI_HOME}/conf/nifi.properties -s <NEW_KEY_TO_SET> -x
You will then have files compatible with your latest version.

How do I help mitigate log4j via haproxy on Enterprise Linux

Based on this post, HAProxy has provided mitigation ACL rules that can be used to help keep Log4j attack requests from getting proxied to the affected Log4j apps.
In reading some of the user comments, it came to my attention that many Enterprise Linux HAProxy systems out there might be running the older HAProxy version 1.5, which doesn't have the option http-buffer-request directive. This directive is critical to any CVE you are trying to mitigate this way, even more so if this is your only option until you can upgrade the affected applications.
What is the easiest way to upgrade, and to what version?
Answering my own question...
HAProxy 1.6 is needed; however, RH provides HAProxy 1.8 in the RHSCL repo.
Make sure these repos are active on the system, then install HAProxy 1.8:
subscription-manager repos --enable=rhel-7-server-optional-rpms --enable=rhel-7-server-rpms --enable=rhel-server-rhscl-7-rpms
yum install -y rh-haproxy18.x86_64
cat the current config into the 1.8 cfg file and run a quick config test:
cat /etc/haproxy/haproxy.cfg > /etc/opt/rh/rh-haproxy18/haproxy/haproxy.cfg
/opt/rh/rh-haproxy18/root/usr/sbin/haproxy -c -V -f /etc/opt/rh/rh-haproxy18/haproxy/haproxy.cfg
^ Correct any warnings or alert errors. In my experience, it's mainly directive order or check-port strings on your backend services; these are relatively simple to search the net for and correct.
nano /etc/opt/rh/rh-haproxy18/haproxy/haproxy.cfg
Add the ACL rules; use unique ACL names if you have more than one listener/frontend in the cfg so they don't overlap (a simplified sketch follows below).
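For illustration, a hypothetical, much-simplified version of the kind of rule the HAProxy post describes (the real post covers more headers and encoding variants, so use their rules verbatim):
frontend web
    bind :80
    # required so the full request is buffered before the rules run
    option http-buffer-request
    # deny requests whose decoded URL or headers contain a JNDI lookup string
    http-request deny if { url,url_dec,lower -m sub '${jndi:' }
    http-request deny if { req.hdrs,lower -m sub '${jndi:' }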
systemctl disable --now haproxy && systemctl enable --now rh-haproxy18-haproxy
yum remove -y haproxy && yum install -y rh-haproxy18-haproxy-syspaths.x86_64
^ The rh-haproxy18-haproxy-syspaths.x86_64 package takes over the haproxy service name, so you do not have to update the systemctl scripts, or even keepalived if you are using it to monitor the haproxy service.
I use this within keepalived, and it works before and after the switch:
killall -0 haproxy
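For context, a hypothetical keepalived tracking script built around that check (the name and timings are illustrative; it would be referenced from the vrrp_instance via track_script):
vrrp_script chk_haproxy {
    script "killall -0 haproxy"
    interval 2
    weight 2
}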
HAProxy has blogged about this topic:
December/2021 – CVE-2021-44228: Log4Shell Remote Code Execution Mitigation

How do I get a custom Nagios plugin to work with NRPE?

I have a system with no internet access where I want to install some Nagios monitoring services/plugins. I installed NRPE (Nagios Remote Plugin Executor), and I can see commands defined in it, like check_users, check_load, check_zombie_procs, etc.
command[check_users]=/usr/lib/nagios/plugins/check_users -w 5 -c 10
command[check_load]=/usr/lib/nagios/plugins/check_load -w 15,10,5 -c 30,25,20
...
I am able to run the commands like so:
/usr/local/nagios/libexec/check_nrpe -H 127.0.0.1 -c check_load
This produces an output like:
OK - load average: 0.01, 0.13, 0.12|load1=0.010;15.000;30.000;0; load5=0.130;10.000;25.000;0; load15=0.120;5.000;20.000;0;
or
WARNING - load average per CPU: 0.06, 0.07, 0.07|load1=0.059;0.150;0.300;0; load5=0.069;0.100;0.250;0; load15=0.073;0.050;0.200;0;
Now, I want to define/configure/install some more services to monitor. I found a collection of services here. So, say, I want to use the service defined here called check_hadoop_namenode.pl. How do I get it to work with NRPE?
I tried copying the file check_hadoop_namenode.pl into the same directory where other NRPE services are stored, i.e., /usr/lib/nagios/plugins. But it doesn't work:
$ /usr/local/nagios/libexec/check_nrpe -H 127.0.0.1 -c check_hadoop_namenode.pl
I figured this might be obvious because all other services in that directory are binaries, so I need a binary for the check_hadoop_namenode.pl file as well. How do I make the binary for it?
I tried installing the plugins according to the description in the link. But it just tries to install some package dependencies and throws errors as it cannot access the internet (my system has no internet access, as I stated before). This error persists even when I install these dependencies manually on another system and copy them over to the target system.
$ <In another system with internet access>
mkdir ~/repos
git clone https://github.com/harisekhon/nagios-plugins
cd nagios-plugins
sudo nano Makefile
# replace 'yum install' with 'yumdownloader --resolv --destdir ~/repos/'
# replace 'pip install' with 'pip download -d ~/repos/'
This downloaded 43 dependencies (and dependencies of dependencies, and so on) required to install the plugins.
How do I get it to work?
check_users, check_load, and check_zombie_procs are defined on the client side in the nrpe.cfg file. Default locations are /usr/local/nagios/etc/nrpe.cfg or /etc/nagios/nrpe.cfg. As I read it, you have already found that file, so you can move to the next step.
Put something like this in your nrpe.cfg:
command[check_hadoop_namenode]=/path/to/your/custom/script/check_hadoop_namenode.pl -optional -arguments
Then you need to restart the NRPE daemon on the client, with something like service nrpe restart.
Just for your information, these custom scripts don't have to be binaries; you can even use a simple bash script, as sketched below.
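For illustration, a minimal hypothetical bash plugin (the name, path, and thresholds are made up) that follows the Nagios convention of exit codes 0 = OK, 1 = WARNING, 2 = CRITICAL:
#!/bin/bash
# check_tmp_usage - hypothetical example plugin: alert when /tmp fills up
usage=$(df --output=pcent /tmp | tail -n 1 | tr -dc '0-9')
if [ "$usage" -ge 90 ]; then
    echo "CRITICAL - /tmp is ${usage}% full"
    exit 2
elif [ "$usage" -ge 80 ]; then
    echo "WARNING - /tmp is ${usage}% full"
    exit 1
else
    echo "OK - /tmp is ${usage}% full"
    exit 0
fi
Remember to make the script executable (chmod +x) so NRPE can run it.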
And finally, after that, you can call the check_hadoop_namenode command from the Nagios server or via the local NRPE daemon:
/usr/local/nagios/libexec/check_nrpe -H 127.0.0.1 -c check_hadoop_namenode

Installing Membase from source

I am trying to build and install membase from source tarball. The steps I followed are:
Un-archive the tar membase-server_src-1.7.1.1.tar.gz
Issue make (from within the untarred folder)
Once done, I enter the directory install/bin and invoke the script membase-server.
This starts up the server with a message:
The maximum number of open files for the membase user is set too low.
It must be at least 10240. Normally this can be increased by adding
the following lines to /etc/security/limits.conf:
I tried updating limits.conf as suggested, but no luck: it continues to pop up the same message and continues booting.
Given that the server has started, I tried accessing memcached over port 11211, but I get a connection refused message. I then figured out (via netstat) that memcached is listening on 11210 and tried telnetting to port 11210; unfortunately, the connection is closed as soon as I issue the following commands:
stats
set myvar 0 0 5
Note: I am not getting any output from the commands above. (Yes, stats did not show anything, but I still issued set.)
Could somebody help me build and install Membase from source? Also, why is memcached listening on 11210 instead of 11211?
It would be great if somebody could also give me a step-by-step guide to build from source from the Git repository (I have not used autoconf before).
P.S.: I have tried installing from binaries (Debian package) on the same machines, and I am able to successfully install and telnet. Hence I am not sure why the build from source is not working.
You can increase the number of file descriptors on your machine by using the ulimit command. Try doing (you might need to use sudo as well):
ulimit -n 10240
I personally have this set in my .bashrc so that whenever I start my terminal it is always set for me.
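If you want the limit to persist system-wide rather than per shell, entries along these lines in /etc/security/limits.conf should do it (adjust the user name to whatever account runs membase):
membase  soft  nofile  10240
membase  hard  nofile  10240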
Also, memcached listens on port 11210 by default for Membase. This is done because Moxi, the memcached proxy server, listens on port 11211. I'm also pretty sure that the memcached version used for Membase only speaks the binary protocol, so you won't be able to successfully telnet to 11210 and have commands work correctly. Telnetting to 11211 (moxi) should work, though.
