Multiple Inputs for Single Rule Set (Filtering/dropping in a single location) - rsyslog

We're attempting to apply a single filter file, "0_MasterFilter.conf", to filter/drop all messages with certain IPs and hostnames coming in on ALL configured listening ports, in a single location, in order to reduce administrative overhead.
We're attempting to use a single ruleset, "rsyslog_rules", only, and then have multiple inputs for all of the different listening ports. Will the following work? Or is there a better way?
0_MasterFilter.conf
ruleset (name=rsyslog_rules) {
    if $fromhost starts with 'bilbo' or $fromhost-ip == '0.1.1.1' then { stop }
}
1_Port514.conf
ruleset (name=rsyslog_rules) {
    if $fromhost starts with 'testbox' or $fromhost-ip == '0.2.2.2' then { stop }
    set $!dev='syslog_server'
    set $!loc='net1'
    action (type="omfile" dynafile="514_serverX")
}
input (type="imptcp" port="514" ruleset="rsyslog_rules")
input (type="impudp" port="514" ruleset="rsyslog_rules")
2_Port600.conf
ruleset (name=rsyslog_rules) {
    if $fromhost starts with 'lost' or $fromhost-ip == '0.3.3.3' then { stop }
    set $!dev='dum_machine'
    set $!loc='backroom'
    action (type="omfile" dynafile="600_test")
}
input (type="imptcp" port="600" ruleset="rsyslog_rules")

You cannot define a ruleset more than once, so ruleset(name="rsyslog_rules"){...} can only appear once. Note that the name must be in quotes. Also, starts with must be written as one word: startswith. Do a syntax check with rsyslogd -N1 -f myconfig.conf.
If you want a set of rules that applies to all inputs, plus individual rules that apply only to some of the inputs, put all the common rules in one ruleset, bind a new independent ruleset to each input, and call the common ruleset from each of these independent rulesets.
For example:
0_MasterFilter.conf
ruleset (name="rsyslog_rules") {
    if $fromhost startswith 'bilbo' or $fromhost-ip == '0.1.1.1' then { stop }
}
1_Port514.conf
ruleset (name="special1") {
    call rsyslog_rules
    if $fromhost startswith 'testbox' or $fromhost-ip == '0.2.2.2' then { stop }
    set $!dev='syslog_server';
    set $!loc='net1';
    action (type="omfile" dynafile="514_serverX")
}
input (type="imptcp" port="514" ruleset="special1")
input (type="impudp" port="514" ruleset="special1")
The call command can be given anywhere in a ruleset. Note that the name is not put in quotes ("").
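Applying the same pattern, 2_Port600.conf gets its own ruleset that calls the common one. A minimal sketch along the lines of the example above (the ruleset name "special2" is illustrative):
2_Port600.conf
ruleset (name="special2") {
    call rsyslog_rules
    if $fromhost startswith 'lost' or $fromhost-ip == '0.3.3.3' then { stop }
    set $!dev='dum_machine';
    set $!loc='backroom';
    action (type="omfile" dynafile="600_test")
}
input (type="imptcp" port="600" ruleset="special2")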

Related

How to remove first few lines of CSV in Logstash

This is the input I am using for logstash.
ItemId,AssetId,ItemName,Comment
11111,07,ABCDa,XYZa
11112,07,ABCDb,XYZb
11113,07,ABCDc,XYZc
11114,07,ABCDd,XYZd
11115,07,ABCDe,XYZe
11116,07,ABCDf,XYZf
11117,07,ABCDg,XYZg
Date,Time,Mill Sec,rows,columns
19-05-2020,13:03:46,534,2,2
19-05-2020,13:03:46,539,2,2
19-05-2020,13:03:46,544,2,2
19-05-2020,13:03:46,549,2,2
19-05-2020,13:03:46,554,2,2
I need to remove the first 8 lines from the CSV, make the next line the column header, and parse the rest of the lines as usual. Is there a way to do that in Logstash?
You could do this using the file input to read it line by line, then use grok to make sure each line has the right number of comma-separated fields and to ignore the header lines.
Your input will look like this:
input {
  file {
    path => "/path/to/my.csv"
    start_position => "beginning"
  }
}
This will read each line into an event with the data in the field named message and then send it to your filters.
In your filter you'll use grok with a pattern like this:
filter {
  grok {
    match => { "message" => [
        "^%{DATE:Date},%{TIME:Time},%{NUMBER:Mill_Sec},%{NUMBER:rows},%{NUMBER:columns}$"
      ]
    }
  }
}
This will present each line as an event looking like this:
{
  "columns": "2",
  "Time": "13:03:46",
  "Mill_Sec": "554",
  "rows": "2",
  "Date": "19-05-2020"
}
You can use mutate to remove unwanted fields (like message) before the event reaches your output. If a line does not match the pattern, the event gets a tag with the value _grokparsefailure in its tags; you can use that to decide whether to send it to your output. Since the pattern requires numbers, the header line will also fail to match, leaving you with only 'real' events.
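For example, a minimal mutate sketch that drops the raw message field once grok has extracted the structured fields:
filter {
  mutate {
    # the parsed fields are already on the event, so the raw line can go
    remove_field => [ "message" ]
  }
}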
This can be done by having your output defined like this:
output {
  if "_grokparsefailure" not in [tags] {
    elasticsearch {
      ...
    }
  }
}
You should do this before the file gets to Logstash. There are ways to do it within Logstash, for example by using a multiline codec, then doing exotic grok matches to remove the first N lines (or removing lines until a particular regex), then doing a split followed by a plain ol' csv filter. You need to be even more careful than usual with header rows. It's a big mess.
Much better to put something in front of Logstash to handle this issue.
If the files are local to your logstash instance, you could use the Exec input plugin to deal with the irregularities.
input {
  exec {
    command => "/path/to/command_or_script" # sh or py or js etc
    interval => 60
  }
}
On Linux, this command will print a file from the 8th line on...
command => "tail -n +8 /path/to/file"
This one (again for Linux) will drop everything before the first line that starts with 'Date', and print that line and everything after it:
command => "sed -n -e '/^Date/,$p' /path/to/file"
You can avoid reading the same file over and over again by deleting or archiving it in a script (rather than the one-liners used in these examples).
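A minimal shell sketch of that idea (the paths and the archive directory are assumptions):
#!/bin/sh
# Emit everything from the 8th line on, then move the file aside
# so the next interval does not re-read it.
f=/path/to/file
if [ -f "$f" ]; then
  tail -n +8 "$f"
  mv "$f" "/path/to/archive/$(basename "$f").$(date +%s)"
fi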
After trimming the unwanted leading lines, you should be able to use the csv filter in a normal way.
Note that if you want to use autodetect_column_names, pipeline workers must be set to 1.
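A minimal csv filter sketch along those lines (assuming the trimmed stream now begins with the Date,Time,... header line):
filter {
  csv {
    # the first line seen becomes the header row; this is only
    # deterministic when pipeline.workers is set to 1
    autodetect_column_names => true
  }
}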
Your content is not in CSV format. Your task is to convert it to true CSV format first.

Using Gradle typed tasks, how can we exclude different types of files?

Using a Gradle typed task, how can we exclude from the copy any files whose names start with, as well as end with, certain strings?
def contentSpec = copySpec {
    exclude {
        it.file.name.startsWith('img')
        it.file.name.endsWith('gif')
    }
    from 'src'
}
task copyImages (type: Copy) {
    with contentSpec
    into 'Dest'
}
On running gradle copyImages, it excludes files ending with gif, but does not exclude files starting with img.
Is there a way to achieve both?
You forgot an or (||) between your two conditions:
exclude { it.file.name.startsWith('img') || it.file.name.endsWith('gif') }
The value of a closure is the value of its last expression. Since the last expression, in your code, is it.file.name.endsWith('gif'), that's the value of the closure, and the file is thus excluded when it.file.name.endsWith('gif') is true.
Of course, you could also use two exclusions:
exclude {
    it.file.name.startsWith('img')
}
exclude {
    it.file.name.endsWith('gif')
}

How to run multiple filters on various file types in processResources

I'm trying to do some processing on some source before moving it to the build directory. Specifically, for files with a .template name, replace instances of #timestamp# with the timeStamp variable I've defined. Additionally, for .jsp files, I would like to remove whitespace between tags.
The first part of this, replacing the timestamp, works. Replacing the whitespace in the JSPs does not.
processResources {
    def timeStamp = Long.toString(System.currentTimeMillis())
    from ('../src/resources/webapp') {
        include '**/*.template'
        filter {
            it.replace('#timestamp#', timeStamp)
        }
        rename '(.*)\\.template', '$1'
    }
    from ('../src/resources/webapp') {
        include '**/*.jsp'
        filter {
            it.replace('>\\s+<', '><')
        }
    }
}
Previous to using processResources, I had done something like this for the minification:
task minifyJSPs(type: Copy) {
    from ('../src/resources/webapp') {
        include '**/*.jsp'
        filter {
            it.replace('>\\s+<', '><')
        }
    }
    into 'gbuild'
}
Filtering the files like this using copy worked; however, I noticed I wasn't able to copy from a location to itself: the file would end up empty. This meant I had to copy files from one intermediate directory to another, applying a filter at each step.
I want to be able to apply various transformations to my source in one step. How can I fix my processResources task?
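One likely fix, as a sketch: Groovy strings are Java strings, and String.replace(CharSequence, CharSequence) substitutes literal text, so the whitespace pattern is searched for verbatim and never matches. replaceAll treats its first argument as a regular expression:
from ('../src/resources/webapp') {
    include '**/*.jsp'
    filter {
        // replaceAll interprets '>\\s+<' as a regex, so runs of
        // whitespace between tags actually match and collapse
        it.replaceAll('>\\s+<', '><')
    }
}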

Custom War tasks and applying custom resources within Gradle

I want to have dynamic WAR tasks based on customer configuration. I created an array with the configuration names and tried to apply custom behavior like so:
ext.customerBuilds = ['customer1', 'customer2', 'customer3']
ext.customerBuilds.eachWithIndex() { obj, i ->
    task "dist_${obj}" (type:War) << {
        from "etc/customers/${obj}/deploy"
        println "I'm task number $i"
    }
};
This creates my three tasks like dist_customer1, etc. Now I want Gradle to use the normal resources under src/main/webapp AND my customer-based ones under etc/customers/XXXX/deploy, as stated in the from property.
But it doesn't pick up any file in this folder.
What am I doing wrong here? Thanks.
When setting up your War task, ensure you don't accidentally use the '<<' notation.
'<<' is just a shortcut for Task#doLast, so instead do:
ext.customerBuilds = ['customer1', 'customer2', 'customer3']
ext.customerBuilds.eachWithIndex() { obj, i ->
    task("dist_${obj}", type:War) {
        from "etc/customers/${obj}/deploy"
        println "I'm task number $i"
    }
};
You can just add more from statements to pick up stuff from 'src/main/webapp'.
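Putting that together, a sketch (assuming a custom War-typed task needs src/main/webapp added explicitly, unlike the default war task):
ext.customerBuilds = ['customer1', 'customer2', 'customer3']
ext.customerBuilds.eachWithIndex() { obj, i ->
    task("dist_${obj}", type: War) {
        from 'src/main/webapp'             // common resources
        from "etc/customers/${obj}/deploy" // customer-specific overlay
    }
};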

How to get access to line number in file Copy filter task?

This is what I need:
I have log statements in my JavaScript files; they look like this:
log.d("some log message here")
I want to dynamically add the file name and line number to these during the copy task.
OK, so adding the file name is probably easy enough, but how do I get access to the line number? Strangely, I could not find any information on how to do this.
The filter() method of the Copy task just passes the actual line; it would be nice if it passed two arguments: the line and the line number.
Here is the template of my task. I added comments describing what I need to achieve.
I know I can get the name of the file from fileCopyDetails using fileCopyDetails.getSourceName(), but I am stuck on how to replace the lines that start with log.d() with a new log.d statement that has the line number.
I am really hoping someone can help me here.
task addLineNumbers(type: Copy) {
    into 'build/deploy'
    from 'source'
    eachFile { fileCopyDetails ->
        // Here I need to add line number to log.d("message")
        // to become log.d("[$fileName::$line] + $message")
        // for example, if original line in userdetails.js file was log.d("something logged here")
        // replace with log.d("[userdetails.js::43] something logged here")
    }
}
Not the most elegant solution, but works for me:
task addLineNumbers(type: Copy) {
    into 'build/deploy'
    from 'source'
    def currentFile = ""
    def lineNumber = 1
    // eachFile fires once per file, before its content is filtered,
    // so it can reset the per-file state
    eachFile { fileCopyDetails ->
        currentFile = fileCopyDetails.getSourceName()
        lineNumber = 1
    }
    // filter is called once per line of the file being copied
    filter { line ->
        if (line.contains('log.d("')) {
            line = line.replace('log.d("', "log.d(\"[${currentFile}::${lineNumber}] ")
        }
        lineNumber++
        return line
    }
}
The Copy task just copies files (it can filter/expand files using Ant filters or Groovy SimpleTemplateEngine). You're looking for a pre-processor of sorts. I think it's possible to do that with a custom Ant filter, but it seems like a lot of work.
I think what people typically do is use something like this at runtime to find the file/line number: How can I determine the current line number in JavaScript?
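A rough sketch of that runtime approach (stack trace formats are engine-specific, so the frame index and parsing here are assumptions, not guarantees):
function logd(message) {
    // new Error().stack is non-standard but widely supported;
    // in V8-style traces, frames[2] is typically logd's caller
    var frames = (new Error().stack || '').split('\n');
    var caller = frames[2] ? frames[2].trim() : 'unknown';
    console.log('[' + caller + '] ' + message);
}
logd('something logged here');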
