I'm fairly new to Fluentd and I am not sure if it can do what I am attempting. I am using it to collect log data from a number of Docker containers running on the same host, so the "hostname" variable that is often discussed does not help me. Instead, the log data I receive includes a field "container_name".
What I would like is for Fluentd to write log files named "container_name-id_timestamp", but none of the approaches I've tried so far have worked. I do not know how to evaluate fields from within the data packet.
I went ahead and matched everything like this:
<match *.**>
  type file
  path /var/log/fluent/
  time_slice_format %Y-%m-%d
  time_slice_wait 10m
  time_format %Y-%m-%dT-%H-%M-%S-%z
</match>
And then tried all kinds of variables like
path /var/log/fluent/${container_name}_%Y-%m-%d
or
path /var/log/fluent/${tag_parts[2]}_%Y-%m-%d
But instead of interpreting the placeholders, it takes them literally. What am I missing?
I'd also be fine with subfolders for each container, which I am having the same problem with.
Thank you.
For anyone coming here looking for a solution: I've found out how to do it.
<match docker.*>
  # Pull the container name out of the record and make it part of the tag
  type rewrite_tag_filter
  rewriterule1 container_name ^\/(.*)$ tagged.$1
</match>
<match tagged.*>
  type forest
  subtype file
  # strip the "tagged." prefix before ${tag} is expanded below
  remove_prefix tagged
  <template>
    time_slice_format %Y-%m-%d
    path /var/log/fluentd/${tag}.*.log
  </template>
</match>
What is happening?
1. Match, e.g., the docker.325435abcd tag.
2. Use fluent-plugin-rewrite-tag-filter to read the container_name from within the data.
3. Rewrite the tag to carry the container name.
4. Match the re-tagged data.
5. Use fluent-plugin-forest to template the log file name from the tag.
Done.
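To make this concrete, here is how a sample event (values made up for illustration) would move through the pipeline:

# Incoming event, tag: docker.325435abcd
#   {"container_name": "/webapp", "log": "..."}
# rewrite_tag_filter matches ^\/(.*)$ against container_name and
# re-emits the event with tag: tagged.webapp
# forest strips the "tagged." prefix, expands ${tag}, and the file
# output fills in the time slice, writing to something like:
#   /var/log/fluentd/webapp.2016-01-15.log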
I need to monitor a log file that rotates every day in the same location. The format of the file is: filename.log.YY-MM-DD
To configure rsyslog, I use a wildcard to match filename.log.*, but I don't want to re-read old logs, just the current day's file.
I tried to use the date command in the File parameter, but it is not recognized. A variable is not recognized either.
input(type="imfile"
      File="/var/log/filename.log.*"
      Tag="logstore"
      Severity="info"
      Facility="local4")
I expected it to log just the latest file, filename.log.<today>, not everything matching filename.log.*.
I have a few lines of configuration that I need in my rsyslog.
if $programname == 'project' then /var/log/file.log
When added to the end of the main rsyslog configuration file, /etc/rsyslog.conf, this configuration appears to be valid and functional.
However, when using the rsyslog.d directory, I get a syntax error:
error during parsing file /etc/rsyslog.d/project.conf, on or before line 2: syntax error on token '==' [v8.32.0 try http://www.rsyslog.com/e/2207 ]
Is there anything in the main config that has to be parsed in advance, or is this a bug that should be reported to Fedora 27 developers?
As the rsyslog author, I would assume that there is some include right in front of it that somehow renders your (valid) construct invalid. Red Hat unfortunately tends to stick to the obsolete legacy format, and things like this can easily happen when it is used (after all, this is why we obsoleted it).
To hunt this down, I would check the config include that comes immediately before your own. If included via wildcards, the include order is sorted by filename.
Sorry, it was my bad. My rsyslog config file was rewritten by my installer bash script, which interpreted the $ sign as a shell variable within the string. I should have double-checked the correctness of the generated config file.
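For anyone hitting the same symptom, here is a minimal sketch of how such an installer bug can arise (the heredoc form is an assumption; the actual script may have differed):

# Unquoted heredoc: the shell expands $programname to an empty string,
# so the written line becomes: if  == 'project' then /var/log/file.log
cat > /etc/rsyslog.d/project.conf <<EOF
if $programname == 'project' then /var/log/file.log
EOF

# Quoting the delimiter disables expansion and writes the line verbatim:
cat > /etc/rsyslog.d/project.conf <<'EOF'
if $programname == 'project' then /var/log/file.log
EOF

The empty expansion explains exactly the reported "syntax error on token '=='".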
I am trying to process log files with a .gz extension in Fluentd using the cat_sweep plugin, and my attempt failed. As shown in the config below, I am trying to process all files under /opt/logfiles/*. However, when the file format is .gz, cat_sweep cannot process the file and starts deleting it. If I unzip the file manually inside /opt/logfiles/, cat_sweep is able to process it.
<source>
  @type cat_sweep
  file_path_with_glob /opt/logfiles/*
  format none
  tag raw.log
  waiting_seconds 0
  remove_after_processing true
  processing_file_suffix .processing
  error_file_suffix .error
  run_interval 5
</source>
So now I need some plugin that can unzip a given file. I searched for plugins that can unzip a zipped file, and I came close when I found the in_exec plugin, which acts like a terminal where I can run something like gzip -d file_path.
Link to the plugin:
http://docs.fluentd.org/v0.12/articles/in_exec
But the problem I see here is that I cannot pass the path of the file to be unzipped at run time.
Can someone help me with some pointers?
Looking at your requirement, you can still achieve it by using the in_exec module.
What you have to do is simply create a shell script that accepts a path to look for .gz files in, plus a wildcard pattern to match file names. Inside the shell script you can then unzip the files in the given folder_path that match the given wildcard pattern. Basically, your shell invocation should look like:
sh unzip.sh <folder_path_to_monitor> <wildcard_to_files>
Then use the above command in an in_exec source in your config, which will look like:
<source>
  @type exec
  format json
  tag unzip.sh
  command sh unzip.sh <folder_path_to_monitor> <wildcard_to_files>
  run_interval 10s
</source>
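For reference, a minimal sketch of what unzip.sh itself might look like (untested; the argument handling is an assumption):

#!/bin/sh
# Usage: sh unzip.sh <folder_path_to_monitor> <wildcard_to_files>
# Decompress every matching .gz file in place so that cat_sweep can
# pick up the resulting plain-text files on its next sweep.
folder="$1"
pattern="$2"
for f in "$folder"/$pattern; do
  [ -e "$f" ] || continue    # glob matched nothing
  case "$f" in
    *.gz) gzip -d "$f" ;;    # replaces file.gz with file
  esac
done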
I wrote some liblognorm rules to parse PostgreSQL logs.
My rule file contains two rules and each rule has some tags like:
rule=POSTGRESQL,CHECKPOINT: ....
rule=POSTGRESQL,SLOWQUERY: ....
After running mmnormalize in my rsyslog configuration, I would like to know which rule actually matched the log line being processed. The simplest solution would be to get the tags. I know that mmnormalize exports some variables, like $parsesuccess. Is there any variable containing the tags of the rule that was used?
I do not remember where I found it in the docs, but in an rsyslog config file that I wrote some time ago, I access the list of tags that liblognorm assigns to a message via $!event.tags.
For example, I have
template( # ...
  property(name="$!event.tags")
)
or
if $!event.tags != "" then { # ...
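Putting it together, a minimal sketch (the rulebase path is a placeholder):

module(load="mmnormalize")
action(type="mmnormalize" rulebase="/etc/rsyslog.d/postgresql.rb")
# Branch on whether liblognorm attached any tags to the message:
if $!event.tags != "" then {
    action(type="omfile" file="/var/log/postgresql-matched.log")
}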
The solution seems to be the annotate feature:
rule=POSTGRESQL,CHECKPOINT: ....
annotate=CHECKPOINT:+checkpoint="complete"
Basically, the annotate line adds a field checkpoint containing the value complete to every log line that matches a rule carrying the tag CHECKPOINT.
Found it here
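Downstream you can then branch on the injected field, e.g. (a sketch; the output file is a placeholder):

if $!checkpoint == "complete" then {
    action(type="omfile" file="/var/log/postgresql-checkpoints.log")
}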
Trying to set up simple file copy processing in Spring XD:
stream create --name mystream --definition "file --dir=/path/source
  --fixedDelay=5 | sink:file --dir=/path/dest --binary=true
  --name=headers['file_name']"
This seems to create and append files to a file literally named headers['file_name'].out in the dest folder.
Looking at the sink:file definition:
<file:outbound-channel-adapter id="files"
mode="${mode}"
charset="${charset}"
directory="${dir}"
filename-generator-expression="'${name}' + '${extensionWithDot}'"/>
I see it puts single quotes around ${name}, which causes the value to be treated as a literal string rather than evaluated as a SpEL expression.
Any suggestions besides creating a new sink:simplefile module that would do what I am looking for? Am I missing something?
Yes, the standard sink is not designed to do what you are trying to do (pass in an expression for the filename).
We should add an alternative property --fileNameExpression=... or similar.
In the meantime, you are correct, you'll need a custom sink (or modify the standard one).
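For illustration, a custom or modified sink could expose the expression directly instead of quoting it; a hypothetical variant of the adapter definition (the nameExpression property name is made up):

<file:outbound-channel-adapter id="files"
    mode="${mode}"
    charset="${charset}"
    directory="${dir}"
    filename-generator-expression="${nameExpression}"/>

You would then deploy with --nameExpression="headers['file_name']" so the expression reaches the adapter unquoted.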
I created a JIRA Issue for this enhancement.