Rsyslog: create two listeners (with and without TLS) with omfile as output. Possible or not? - rsyslog

I am trying to create an rsyslog.conf with multiple listeners, e.g. with and without TLS (with stream driver). It is possible to create multiple inputs, but as I read in the rsyslog documentation, it seems to be impossible to move the stream driver parameters, e.g. streamdriver.mode="1", from module() to input() or to action() when using omfile. Does anybody know if there is a way to create multiple listeners with imtcp and omfile as the output method?
my working script for single listener:
# Prints every message, even if repeated 1001 times in a second. Strongly recommended for use with Splunk
$RepeatedMsgReduction off
module(load="imtcp"
streamdriver.name="gtls" # use gtls netstream driver
streamdriver.mode="1" # require TLS for the connection
streamdriver.authmode="x509/name" # authenticate peers by certificate name
)
global(
defaultNetstreamDriverCAFile="/opt/splunk/etc/auth/sslCerts/CACertificate.pem"
defaultNetstreamDriverCertFile="/opt/splunk/etc/auth/sslCerts/ServerCertificate.pem"
defaultNetstreamDriverKeyFile="/opt/splunk/etc/auth/sslCerts/ServerPrivatKeyDec.key"
)
# Create as many inputs as you like. This one listens on TCP 514.
input(type="imtcp" port="514" ruleset="SplunkNetwork")
# Template for directory + filename structure. Use %FROMHOST-IP% for IP without hostname resolution
template(name="filename-by-host" type="string" string="/opt/logfiles/%FROMHOST%/%$YEAR%-%$MONTH%-%$DAY%.log")
ruleset(name="SplunkNetwork") {
action(type="omfile" DynaFile="filename-by-host" DirCreateMode="0755" FileCreateMode="0644" DirOwner="splunk" DirGroup="splunk" FileOwner="splunk" FileGroup="splunk")
}
What I want to do - not working - passing the streamdriver parameters to input() or action():
# Prints every message, even if repeated 1001 times in a second. Strongly recommended for use with Splunk
$RepeatedMsgReduction off
module(load="imtcp")
global(
defaultNetstreamDriverCAFile="/opt/splunk/etc/auth/sslCerts/CACertificate.pem"
defaultNetstreamDriverCertFile="/opt/splunk/etc/auth/sslCerts/ServerCertificate.pem"
defaultNetstreamDriverKeyFile="/opt/splunk/etc/auth/sslCerts/ServerPrivatKeyDec.key"
)
# Create as many inputs as you like. These listen on TCP 514 and 1514.
input(type="imtcp" port="514" ruleset="SplunkNetwork-anon-no-tsl")
input(type="imtcp" port="1514" ruleset="SplunkNetwork-anon-tsl")
# Template for directory + filename structure. Use %FROMHOST-IP% for IP without hostname resolution
template(name="filename-by-host" type="string" string="/opt/logfiles/%FROMHOST%/%$YEAR%-%$MONTH%-%$DAY%.log")
ruleset(name="SplunkNetwork-anon-no-tsl") {
action(type="omfile" DynaFile="filename-by-host" DirCreateMode="0755" FileCreateMode="0644" DirOwner="splunk" DirGroup="splunk" FileOwner="splunk" FileGroup="splunk" StreamDriverMode="0" StreamDriver="gtls" StreamDriverAuthMode="anon")
}
ruleset(name="SplunkNetwork-anon-tsl") {
action(type="omfile" DynaFile="filename-by-host" DirCreateMode="0755" FileCreateMode="0644" DirOwner="splunk" DirGroup="splunk" FileOwner="splunk" FileGroup="splunk" StreamDriverMode="1" StreamDriver="gtls" StreamDriverAuthMode="anon")
}

You may use
imtcp for TLS
imptcp for plain TCP

You can use both the imptcp and imtcp modules to accept plain TCP and TLS connections. The example below shows the rsyslog configuration required to set up a plain TCP input on port 514 and a TLS input on port 1514.
global(
defaultNetstreamDriverCAFile="/opt/splunk/etc/auth/sslCerts/CACertificate.pem"
defaultNetstreamDriverCertFile="/opt/splunk/etc/auth/sslCerts/ServerCertificate.pem"
defaultNetstreamDriverKeyFile="/opt/splunk/etc/auth/sslCerts/ServerPrivatKeyDec.key"
)
# Load the imptcp module to provide the ability to receive messages over plain TCP
module(load="imptcp")
# Load the imtcp module to provide the ability to receive messages over TLS
module(
load="imtcp"
streamdriver.name="gtls" # use gtls netstream driver
streamdriver.mode="1" # require TLS for the connection
streamdriver.authmode="x509/name" # authenticate peers by certificate name
)
# Listen on port 514 (imptcp module)
input(
type="imptcp"
port="514"
)
# Listen on port 1514 (imtcp module)
input(
type="imtcp"
port="1514"
)
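To also get the per-host file output from the question, each input can be pointed at a ruleset, reusing the template and omfile action from the working single-listener config. A minimal sketch along those lines (the ruleset name, template and action parameters are taken from the question, not from a tested setup):
# Route both listeners into the ruleset that writes one file per host
input(type="imptcp" port="514" ruleset="SplunkNetwork")
input(type="imtcp" port="1514" ruleset="SplunkNetwork")
template(name="filename-by-host" type="string" string="/opt/logfiles/%FROMHOST%/%$YEAR%-%$MONTH%-%$DAY%.log")
ruleset(name="SplunkNetwork") {
action(type="omfile" DynaFile="filename-by-host" DirCreateMode="0755" FileCreateMode="0644" DirOwner="splunk" DirGroup="splunk" FileOwner="splunk" FileGroup="splunk")
}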

Related

TCP packets received on interface but not by Ruby TCPServer

I have a Ruby TCPServer that intermittently does not receive TCP packets. Note that the code below runs inside another threads << Thread.new do block, as I have multiple TCP port listeners.
server = TCPServer.new(7207)
Thread.start(server.accept) do |client|
# process packet, send to AWS SQS
raw = ""
while (line = client.gets)
raw += line
end
sender = client.peeraddr
text = raw.unpack1("H*")
message_body = { payload: text, rx_at: Time.current, sender:}
puts "#{Time.current} : --- New uplink #{message_body}"
# send message to AWS SQS
client.close
end
I see the packets in tcpdump / wireshark but not on my TCPServer. I have a pcap file available:
https://www.dropbox.com/s/7m3hr1b7065tenx/tcp.pcap?dl=0
Example lost packets occurred on:
ip 10.0.225.43 at 27/07/2022 20:56:57 and 27/07/2022 20:39:31
Apologies: after checking in Wireshark, this looks like a network problem, not a Ruby / dev problem. The device was not sending FIN frames, so the network stack was not passing the packet up to my app.
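For reference, client.gets only returns on a newline or at EOF (i.e. after the peer's FIN), so a device that sends unterminated data and never closes the connection can look as if it delivered nothing. A rough sketch of a handler that surfaces whatever bytes arrive, with a hypothetical 5-second idle timeout (an illustration, not the original code):
Thread.start(server.accept) do |client|
  raw = ""
  loop do
    # Wait up to 5 seconds for more data instead of blocking in gets
    ready = IO.select([client], nil, nil, 5)
    break if ready.nil?                 # idle timeout, stop waiting on this client
    begin
      raw += client.read_nonblock(1024) # take whatever bytes are available
    rescue EOFError                     # peer closed the connection (FIN received)
      break
    end
  end
  puts "#{Time.now} : --- received #{raw.unpack1('H*')}" unless raw.empty?
  client.close
end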

Receive Gatling results in InfluxDB v2

I have a basic Gatling script on an EC2 instance from which I want to push the results into an InfluxDB instance. I can successfully run the Gatling script, and Influx is also running.
My Gatling configuration is the following:
data {
writers = [console, graphite] # The list of DataWriters to which Gatling write simulation data (currently supported : console, file, graphite)
console {
#light = false # When set to true, displays a light version without detailed request stats
#writePeriod = 5 # Write interval, in seconds
}
file {
#bufferSize = 8192 # FileDataWriter's internal data buffer size, in bytes
}
leak {
#noActivityTimeout = 30 # Period, in seconds, for which Gatling may have no activity before considering a leak may be happening
}
graphite {
light = false # only send the all* stats
host = "ec2-35-181-26-79.eu-west-3.compute.amazonaws.com" # The host where the Carbon server is located
port = 2003 # The port to which the Carbon server listens to (2003 is default for plaintext, 2004 is default for pickle)
protocol = "tcp" # The protocol used to send data to Carbon (currently supported : "tcp", "udp")
rootPathPrefix = "gatling" # The common prefix of all metrics sent to Graphite
bufferSize = 8192 # Internal data buffer size, in bytes
writePeriod = 1 # Write period, in seconds
}
}
And for Influx, I've set up Telegraf with the following configuration:
[[outputs.influxdb_v2]]
## The URLs of the InfluxDB cluster nodes.
##
## Multiple URLs can be specified for a single cluster, only ONE of the
## urls will be written to each interval.
## urls exp: http://127.0.0.1:8086
urls = ["http://ec2-35-181-26-79.eu-west-3.compute.amazonaws.com:8086"]
## Token for authentication.
token = "$INFLUX_TOKEN"
## Organization is the name of the organization you wish to write to; must exist.
organization = "Test"
## Destination bucket to write into.
bucket = "Test"
[[inputs.socket_listener]]
## URL to listen on
service_address = "tcp://:2003"
data_format = "graphite"
## Content encoding for message payloads, can be set to "gzip" or
## "identity" to apply no encoding.
# content_encoding = "identity"
templates = [
"gatling.*.*.*.* measurement.simulation.request.status.field",
"gatling.*.users.*.* measurement.simulation.measurement.request.field"
]
With both Telegraf (with this configuration) and Influx running, I don't see any data pushed into the 'Test' bucket. Moreover, I don't get any errors that could help me debug.
Any help would be much appreciated. Thanks.
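Since Telegraf reports nothing, one way to narrow the problem down is to take Gatling out of the loop and send a single hand-written Graphite line to the socket_listener (the hostname and port are the ones from the question; the metric name below is made up but matches the first template, and nc is assumed to be available). If a point then shows up in the Test bucket, the Telegraf/Influx side works and the issue is on the Gatling side; if not, the listener or the templates are the place to look:
echo "gatling.basicsimulation.request_1.ok.count 1 $(date +%s)" | nc ec2-35-181-26-79.eu-west-3.compute.amazonaws.com 2003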

HAproxy+Lua: Return requests if validation fails from Lua script

We are trying to build an incoming request validation platform using HAProxy and Lua.
Our use case is to create a Lua script that makes a socket call to a validation API and, based on the response from the validation API, either forwards the request to a backend API or, if validation fails, returns a response directly from the Lua script. For example, for a 200 response we want to forward the request to the backend API, and for a 404 we want to return the request. From the documentation, I understand that there are various default functions available with the HAProxy-Lua integration:
core.register_action() --> I'm using this. Takes TXN as input.
core.register_converters() --> Essentially used for string manipulation.
core.register_fetches() --> Takes TXN as input and returns a string; mainly used for selecting dynamic backend profiles in the HAProxy config.
core.register_init() --> Used for initialization.
core.register_service() --> You have to return a response when using this function, which doesn't satisfy our requirement.
core.register_task() --> For running normal functions. No mandatory input class; a TXN is still required to fetch header details from the request.
I have tried all of the functions from the list above. I understand that core.register_service is basically meant to return a response from the Lua script. The problem is that we must send the response from the Lua script, and the request will then not be forwarded to the backend. Currently I am using core.register_action to intercept the requests, but I'm not able to return a response using this function. Here's what my code looks like:
local http_socket = require("socket.http")
local pretty_print = require("pl.pretty")
function add_http_request_header(txn, name, value)
local headerName = name
local headerValue = value
txn.http:req_add_header(headerName, headerValue)
end
function call_validation_api()
local request, code, header = http_socket.request {
method = "GET", -- Validation API Method
url = "http://www.google.com/" -- Validation API URL
}
-- Using core.log; Print in some cases is a blocking operation http://www.arpalert.org/haproxy-lua.html#h203
core.Info( "Validation API Response Code: " .. code )
pretty_print.dump( header )
return code
end
function failure_response(txn)
local response = "Validation Failed"
core.Info(response)
txn.res:send(response)
-- txn:close()
end
core.register_action("validation_action", { "http-req", "http-res" }, function(txn)
local validation_api_code = call_validation_api()
if validation_api_code == 200 then
core.Info("Validation Successful")
add_http_request_header(txn, "test-header", "abcdefg")
pretty_print.dump( txn.http:req_get_headers() )
else
failure_response(txn) --->>> **HERE I WANT TO RETURN THE RESPONSE**
end
end)
Following is the configuration file entry:
frontend http-in
bind :8000
mode http
http-request lua.validation_action
#Capturing header of the incoming request
capture request header test-header len 64
#use_backend %[lua.fetch_req_params]
default_backend app
backend app
balance roundrobin
server app1 127.0.0.1:9999 check
Any help in achieving this functionality is much appreciated. Also, I understand that a socket call from a Lua script is a blocking call, which goes against HAProxy's default keep-alive, non-blocking nature. Please feel free to suggest any other utility for achieving this functionality, if you have already used one.
OK, I have figured out the answer to this question:
I created 2 backends, one for success and one for failure of requests, and based on the validation response I return 2 different strings. In "failure_backend" I call a different service, which essentially is a core.register_service and can return the response. I'm pasting the code for both the configuration file and the Lua script below.
HAProxy conf file:
#---------------------------------------------------------------------
# Global settings
#---------------------------------------------------------------------
global
# to have these messages end up in /var/log/haproxy.log you will
# need to:
#
# 1) configure syslog to accept network log events. This is done
# by adding the '-r' option to the SYSLOGD_OPTIONS in
# /etc/sysconfig/syslog
#
# 2) configure local2 events to go to the /var/log/haproxy.log
# file. A line like the following can be added to
# /etc/sysconfig/syslog
#
# local2.* /var/log/haproxy.log
#
log 127.0.0.1 local2
maxconn 4000
user haproxy
group haproxy
daemon
#lua file load
lua-load /home/aman/coding/haproxy/http_header.lua
# turn on stats unix socket
stats socket /var/lib/haproxy/stats
#---------------------------------------------------------------------
# common defaults that all the 'listen' and 'backend' sections will
# use if not designated in their block
#---------------------------------------------------------------------
defaults
mode http
log global
option httplog
option dontlognull
retries 3
timeout http-request 90s
timeout queue 1m
timeout connect 10s
timeout client 1m
timeout server 1m
timeout http-keep-alive 10s
timeout check 10s
maxconn 3000
#---------------------------------------------------------------------
# main frontend which proxys to the backends
#---------------------------------------------------------------------
frontend http-in
bind :8000
mode http
use_backend %[lua.validation_fetch]
default_backend failure_backend
#---------------------------------------------------------------------
# round robin balancing between the various backends
#---------------------------------------------------------------------
backend success_backend
balance roundrobin
server app1 172.23.12.94:9999 check
backend failure_backend
http-request use-service lua.failure_service
# For displaying HAProxy statistics.
frontend stats
bind :8888
default_backend stats
backend stats
stats enable
stats hide-version
stats realm Haproxy\ Statistics
stats uri /haproxy/stats
stats auth aman:rjil#123
Lua script:
local http_socket = require("socket.http")
local pretty_print = require("pl.pretty")
function add_http_request_header(txn, name, value)
local headerName = name
local headerValue = value
txn.http:req_add_header(headerName, headerValue)
end
function call_validation_api()
local request, code, header = http_socket.request {
method = "GET", -- Validation API Method
url = "http://www.google.com/" -- Validation API URL
}
-- Using core.log; Print in some cases is a blocking operation http://www.arpalert.org/haproxy-lua.html#h203
core.Info( "Validation API Response Code: " .. code )
pretty_print.dump( header )
return code
end
function failure_response(txn)
local response = "Validation Failed"
core.Info(response)
return "failure_backend"
end
-- Decides back-end based on Success and Failure received from validation API
core.register_fetches("validation_fetch", function(txn)
local validation_api_code = call_validation_api()
if validation_api_code == 200 then
core.Info("Validation Successful")
add_http_request_header(txn, "test_header", "abcdefg")
pretty_print.dump( txn.http:req_get_headers() )
return "success_backend"
else
failure_response(txn)
end
end)
-- Failure service
core.register_service("failure_service", "http", function(applet)
local response = "Validation Failed"
core.Info(response)
applet:set_status(400)
applet:add_header("content-length", string.len(response))
applet:add_header("content-type", "text/plain")
applet:start_response()
applet:send(response)
end)

Ruby, get incoming address from UDP message

I have a UDP server that binds to all addresses on a system, and I would like to know which IP address a message was addressed to. Any ideas how to do this?
Here is my example code:
sock = Socket.new(Socket::AF_INET, Socket::SOCK_DGRAM, 0)
sock.bind(Addrinfo.udp('', 2400))
while(true)
sockset = IO.select([sock])
sockset[0].each do |sock|
data = sock.recvfrom(1024)
puts "data: " + data.inspect
end
end
sock.close
This will produce something like:
data: ["test message\n", #<Addrinfo: 172.16.5.110:41949 UDP>]
Am I able to set a socket option, or something, to return the local IP?
Just a note, this needs to work for IPv6 too. Thanks in advance, Dave.
UNIX Network Programming has this to say about this very subject:
With a UDP socket, however, the destination IP address can only be obtained by setting the IP_RECVDSTADDR socket option for IPv4 or the IPV6_PKTINFO socket option for IPv6 and then calling recvmsg instead of recvfrom.
Ruby’s socket library has recvmsg which is a bit easier to use than the underlying C function, but still needs a bit of work to get the info needed. The destination IP address is included in the array of ancillary data returned from recvmsg. Here’s a version of your code adapted to use recvmsg and get the destination address for IPv4:
require 'socket'
require 'ipaddr'
sock = Socket.new(Socket::AF_INET, Socket::SOCK_DGRAM, 0)
# Set the required socket option:
sock.setsockopt :IPPROTO_IP, :IP_RECVDSTADDR, true
sock.bind(Addrinfo.udp('0.0.0.0', 2400))
while(true)
sockset = IO.select([sock])
sockset[0].each do |sock|
mesg, sender, _, *anc_data = sock.recvmsg
# Find the relevant data and extract it into a string
dest = IPAddr.ntop(anc_data.find {|d| d.cmsg_is?(:IP, :RECVDSTADDR)}.data)
puts "Data: #{mesg}, Sender: #{sender.ip_address}, Destination: #{dest}"
end
end
And here is a version for IPv6. There is also a RECVPKTINFO socket option, which I think may have superseded PKTINFO – depending on your system you may need to use that instead.
require 'socket'
sock = Socket.new(Socket::AF_INET6, Socket::SOCK_DGRAM, 0)
# Set the socket option for IPv6:
sock.setsockopt :IPPROTO_IPV6, :IPV6_PKTINFO, true
sock.bind(Addrinfo.udp('0::0', 2400))
while(true)
sockset = IO.select([sock])
sockset[0].each do |sock|
mesg, sender, _, *anc_data = sock.recvmsg
# Find and extract the destination address
dest = anc_data.find {|d| d.cmsg_is?(:IPV6, :PKTINFO)}.ipv6_pktinfo_addr
puts "Data: #{mesg}, Sender: #{sender.ip_address}, Destination: #{dest.ip_address}"
end
end
Ruby also provides a Socket.udp_server_loop method, which yields the message and a UDPSource object to the block you provide, and this source object has a local_address field. Looking at the source, this appears to check the PKTINFO data as I do above to get the destination address for IPv6 requests, but not for IPv4. This method binds to all available IP addresses individually and just uses the address of the incoming interface for IPv4 requests, which may not be accurate for a weak end system model. However, it might be simpler for you to use Socket.udp_server_loop.
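For completeness, a minimal sketch of the udp_server_loop approach described above, using port 2400 from the question (and keeping in mind the IPv4 caveat):
require 'socket'

# Binds each local address individually and yields one message at a time.
Socket.udp_server_loop(2400) do |msg, msg_src|
  puts "Data: #{msg}, Sender: #{msg_src.remote_address.ip_address}, " \
       "Destination: #{msg_src.local_address.ip_address}"
end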

perl Socket6 binding to only one wildcard address

I have the following program in Perl which is supposed to listen on an IPv6 address and, in theory, should serve both IPv4 (through IPv4-mapped IPv6 addresses) and IPv6 clients on a dual-stack box.
use Socket;
use Socket6;
@res = getaddrinfo('', 8086, AF_UNSPEC, SOCK_STREAM, 0, AI_PASSIVE);
my @ipv6Result;
while(scalar(@res)>=5){
my @currentResult = @res;
($family, $socktype, $proto, $saddr, $canonname, @res) = @res;
if($family == AF_INET6){
@ipv6Result = @currentResult;
}
}
if(@ipv6Result){
($family, $socktype, $proto, $saddr, $canonname) = @ipv6Result;
}
socket(Socket_Handle, $family, $socktype,$proto) || next;
bind(Socket_Handle,$saddr ) || die "bind: $!";
listen(Socket_Handle, 1) || die "listen: $!";
$paddr = accept(Client,Socket_Handle) || die "accept: $!";
After running this, netstat gave the following output:
c:\Perl\bin>netstat -nao | findstr 8086
TCP [::]:8086 [::]:0 LISTENING 2892
It seems it is listening only on the IPv6 wildcard address (::) and not on the IPv4 wildcard address (0.0.0.0). I was not able to connect to this server process from an IPv4 client, but I was able to connect from an IPv6 client.
I tried a similar server program in Java as follows (on the same setup):
import java.net.ServerSocket;
public class CodeTCPServer {
public static void main(String[] args) throws Exception{
new ServerSocket(8086).accept();
}
}
The netstat output for this was as follows:
C:\Users\Administrator>netstat -nao | findstr 8086
TCP 0.0.0.0:8086 0.0.0.0:0 LISTENING 3820
TCP [::]:8086 [::]:0 LISTENING 3820
It seems to listen on both IPv6 and IPv4, and I am also able to connect to it from both IPv4 and IPv6 clients.
If I run the same Perl program on a Linux box it works fine, and I am able to connect to it from both IPv4 and IPv6 clients.
I wonder if something on Windows is stopping the Perl program from listening on both IPv4 and IPv6 (but then it should have stopped the Java program as well, for the same reason). If there were some problem with the program logic, it shouldn't have worked on Linux either.
(I am using Socket6 for now, as I couldn't get Perl's built-in IPv6 support to work on Windows somehow; I am in communication with the authors to get it working on my setup.)
UPDATE:
I just tried the following:
setsockopt (Socket_Handle, IPPROTO_IPV6, IPV6_V6ONLY, 0 ) or print("\nFailed to set IPV6_V6ONLY $! ");
in anticipation that the socket option defaults to 1 on this platform and that I have to manually override it, but alas, I got the following error:
Your vendor has not defined Socket macro IPV6_V6ONLY, used at c:\socket6\Socket6Server.pl line 66
Now I wonder what 'vendor' means here: the Socket6 module / Perl vendor, or the OS vendor?
UPDATE2
I think the answer is given in http://metacpan.org/pod/IO::Socket::IP (for the V6Only argument) with the following lines:
If your platform does not support disabling this option but you still want to listen for both AF_INET and AF_INET6 connections you will have to create two listening sockets, one bound to each protocol.
And this worked for me! But then I need to check whether the platform supports disabling V6Only (protocol-aware code in my program :( ), whereas Java does this automatically for me (checking and creating 2 sockets).
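As a rough illustration of that two-socket fallback (the port and variable names are only for the example, and this is untested on Windows), one can use IO::Socket::IP with V6Only forced on for the IPv6 listener so the two sockets don't clash:
use IO::Socket::IP;
use IO::Select;
use Socket qw( AF_INET AF_INET6 SOCK_STREAM );

# One listener per address family, for platforms where IPV6_V6ONLY
# cannot be disabled on a single wildcard socket.
my $v4 = IO::Socket::IP->new(
    Family => AF_INET, LocalHost => '0.0.0.0', LocalPort => 8086,
    Type => SOCK_STREAM, Listen => 1, ReuseAddr => 1,
) or die "IPv4 listen: $@";
my $v6 = IO::Socket::IP->new(
    Family => AF_INET6, LocalHost => '::', LocalPort => 8086,
    Type => SOCK_STREAM, Listen => 1, ReuseAddr => 1, V6Only => 1,
) or die "IPv6 listen: $@";

# Wait on both listeners and accept from whichever becomes readable.
my $sel = IO::Select->new($v4, $v6);
while (my @ready = $sel->can_read) {
    for my $listener (@ready) {
        my $client = $listener->accept or next;
        # ... handle $client ...
        $client->close;
    }
}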
This requires the IPV6_V6ONLY socket option to be turned off. See the IO::Socket::IP source for details on how.
Also, in response to your comment
I am using Socket6 for now, as I couldn't use perl's inbuilt support for IPv6 somehow on windows, I am in communication with the authors to get it worked on my setup)
That's not strictly true, if memory serves. You were having trouble with IO::Socket::IP, but the plain Socket stuff should all be working fine. You don't need to be using Socket6, because Socket 2.006 already has everything that Socket6 provides. You can replace your code with:
use Socket qw( :addrinfo SOCK_STREAM AF_INET6 );
my ($err, @res) = getaddrinfo('', 8086,
{ socktype => SOCK_STREAM, flags => AI_PASSIVE });
my $ipv6Result;
my $current;
while(@res){
$current = shift @res;
if($current->{family} == AF_INET6) {
$ipv6Result = $current;
}
}
if($ipv6Result) {
$current = $ipv6Result;
}
socket(my $sock, $current->{family}, $current->{socktype}, $current->{proto}) or die "socket: $!";
bind($sock, $current->{addr}) or die "bind: $!";
listen($sock, 1) or die "listen: $!";
my $paddr = accept(my $client, $sock) or die "accept: $!";
