I would like to write an application config in Go using flags and then have it write out to an .ini file.
I've done this with JSON files, but can't figure out the ini.
Any suggestions?
The .ini file will look like so:
Name=*flag input
[Output]
Mode=*flag input
[Input]
BaseCX=*flag input
BaseCY=*flag input
Common=*flag input
Can't figure it out.
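For reference, here is a minimal sketch of the write-out step using only the standard library. The flag names and the output filename app.ini are assumptions; since the layout is fixed, plain formatted writes are enough (a library such as gopkg.in/ini.v1 would also work):

package main

import (
	"flag"
	"fmt"
	"log"
	"os"
)

func main() {
	name := flag.String("name", "", "value for Name")
	mode := flag.String("mode", "", "value for Output.Mode")
	baseCX := flag.String("basecx", "", "value for Input.BaseCX")
	baseCY := flag.String("basecy", "", "value for Input.BaseCY")
	common := flag.String("common", "", "value for Input.Common")
	flag.Parse()

	f, err := os.Create("app.ini")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	// Write the fixed layout: a top-level key, then two sections.
	fmt.Fprintf(f, "Name=%s\n\n", *name)
	fmt.Fprintf(f, "[Output]\nMode=%s\n\n", *mode)
	fmt.Fprintf(f, "[Input]\nBaseCX=%s\nBaseCY=%s\nCommon=%s\n", *baseCX, *baseCY, *common)
}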
I have a .conf file that has a label and variable that I'm trying to move to a .env file. How can I accomplish this?
This is what I have in my .conf file
[[inputs.snmp]] #Label
agents = ["1.1.1.1:111","2.2.2.2:111","2.3.3.3:111"] #Variable
version = 2 #Variable
I'm trying to have something like this in the .env file
VAR_FOR_CONF_FILE='[[inputs.snmp]]\n agents= ["1.1.1.1:111","2.2.2.2:111","2.3.3.3:111"]\n version=2'
I was hoping I could use $VAR_FOR_CONF_FILE in my .conf file instead of [[inputs.snmp]] agents = ["1.1.1.1:111","2.2.2.2:111","2.3.3.3:111"],
but I keep getting this error: Error parsing data: line 7: invalid TOML syntax for $VAR_FOR_CONF_FILE. (I'm not sure if I'm getting this error because I have the syntax wrong in my .env file or because I'm declaring $VAR_FOR_CONF_FILE incorrectly in my .conf file.)
Am I doing it correctly (or is it even possible to do what I'm trying to accomplish)?
(Note: I'm trying to accomplish this so I can simply use $VAR_FOR_CONF_FILE in the .conf file instead of hard coding things there.)
If the program that parses the .conf file doesn't support variable substitution, there's no way around modifying the .conf file itself, but this can be automated:
sed -i "s/\<VAR_FOR_CONF_FILE\>/$VAR_FOR_CONF_FILE/" my.conf
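A sketch of wiring it together, assuming the .env file uses shell-compatible KEY='value' syntax so it can be sourced directly (note that /, & and literal newlines in the value would need escaping for sed):

# Load VAR_FOR_CONF_FILE from the .env file into this shell,
# then expand the placeholder in the config file in place.
. ./.env
sed -i "s/\<VAR_FOR_CONF_FILE\>/$VAR_FOR_CONF_FILE/" my.conf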
I have a CSV file similar to below:
0,Bob's Business,50 some address,zip,telephone
1,Jill's Business,25 some address,zip,telephone
...
I would like to take this CSV file and have Pandoc produce a markdown file for each line in the CSV file, with each column accessible as a variable in a markdown template file.
Is it possible to load a CSV file and produce markdown/html files in this way?
I can see three ways.
Use a static site generator
I would probably just use a tool like Jekyll with its data files.
Alternative 1: Convert to YAML and use pandoc's template engine
Put something like this in mytemplate.md:
$for(data)$
$data$
$endfor$
Convert the CSV to a JSON or YAML file,
load that file with the --metadata-file option, and use the template to render the output:
echo '' | pandoc --metadata-file data.yaml -t markdown --template mytemplate.md -o output.md
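For illustration, the CSV rows above might become a data.yaml like the following (the list name data matches the loop variable in the template; the field names are assumptions):

data:
- id: 0
  name: Bob's Business
  address: 50 some address
- id: 1
  name: Jill's Business
  address: 25 some address

Inside the loop the template can then address individual columns, e.g. $data.name$ and $data.address$.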
Alternative 2: Write a pandoc filter
There are many pandoc filters (like pandoc-placetable or pantable) that read CSV and convert it to a pandoc table. But you want to convert it into pandoc metadata (which is usually parsed from the YAML frontmatter of markdown files). I guess you could adjust one of those filters to your purposes.
I am trying to process log files with a .gz extension in fluentd using the cat_sweep plugin, and failed in my attempt. As shown in the config below, I am trying to process all files under the /opt/logfiles/* location. However, when the file format is .gz, cat_sweep is unable to process the file and starts deleting it, but if I unzip the file manually inside the /opt/logfiles/ location, cat_sweep is able to process the file.
<source>
  @type cat_sweep
  file_path_with_glob /opt/logfiles/*
  format none
  tag raw.log
  waiting_seconds 0
  remove_after_processing true
  processing_file_suffix .processing
  error_file_suffix .error
  run_interval 5
</source>
So now I need some plugin that can unzip a given file. I tried searching for plugins that can unzip a zipped file. I came close when I found out about a plugin that acts like a terminal, where I can use something like gzip -d file_path.
Link to the plugin:
http://docs.fluentd.org/v0.12/articles/in_exec
But the problem I see here is that I cannot send the path of the file to be unzipped at run-time.
Can someone help me with some pointers?
Looking at your requirement, you can still achieve it by using the in_exec module.
What you have to do is simply create a shell script that accepts a path to look for .gz files in and a wildcard pattern to match file names. Inside the shell script you can unzip the files in the folder_path that was passed, using the given wildcard pattern (a sketch of such a script follows the config below). Basically your shell execution should look like:
sh unzip.sh <folder_path_to_monitor> <wildcard_to_files>
Use the above command in the in_exec source in your config, which will then look like:
<source>
  @type exec
  format json
  tag unzip.sh
  command sh unzip.sh <folder_path_to_monitor> <wildcard_to_files>
  run_interval 10s
</source>
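A minimal sketch of what unzip.sh itself could look like; everything here is an assumption, including the JSON line printed at the end to satisfy format json above:

#!/bin/sh
# Usage: sh unzip.sh <folder_path_to_monitor> <wildcard_to_files>
folder="$1"
pattern="$2"

for f in "$folder"/$pattern; do
  [ -e "$f" ] || continue                  # skip if nothing matched the glob
  gzip -d "$f"                             # replaces file.gz with the unzipped file
  printf '{"unzipped":"%s"}\n' "${f%.gz}"  # report what was processed
done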
I read my log files (cron_log, auth_log, mail_log, etc) using this config:
file {
  path => '/path/to/log/file/*_log'
}
So I read my log files and check:
if [path] =~ "cron" { ... }   # match
if [path] =~ "auth" { ... }   # match
Now I have directories like Server1, Server2, Server3, ... In Server1 there are subdirectories such as authlog and cronlog. Inside authlog there are date-wise subdirectories (like 2014.05.26, 2014.05.27) which finally contain the log file for the day, which I have to parse.
Presently I have one config file that reads files using *_log; when I run that config file, all log files present in /path/to/log/file/*_log are parsed.
Now I have to read from many directories (as explained above).
Will I have to write a separate config file for each directory?
What's the best way to achieve this using logstash?
Ruby globs interpret ** as including all subdirectories.
So, for example, you could give the file input a path such as:
/path/to/date/folders/**/*_log
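Applied to the layout described above, a single file input could cover every server, log type, and date directory at once (the /path/to prefix is an assumption):

input {
  file {
    path => "/path/to/Server*/**/*_log"
  }
}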
I wrote a log parser application and now I want to implement decompression of .gz files. I tried it with boost::iostreams and zlib, which seems to work, but I don't know how to handle the input I get from compressed files.
Here's what I do:
// Requires the Boost.Iostreams headers:
#include <boost/iostreams/filtering_streambuf.hpp>
#include <boost/iostreams/filter/gzip.hpp>
#include <boost/iostreams/copy.hpp>

input.open(p.source_at(i).c_str(), ios_base::in | ios_base::binary);
boost::iostreams::filtering_streambuf<boost::iostreams::input> in;
in.push(boost::iostreams::gzip_decompressor());
in.push(input);
boost::iostreams::copy(in, cout);
This code runs if my source file has the .gz ending. The last line outputs the decompressed stream correctly to cout.
But how can I fetch line by line from the decompressed file? My program uses getline(input, transfer) to read lines from the input stream if it's not compressed.
Now I want to read from the decompressed file the same way, but how can I get a new line from in?
The boost documentation didn't help me much with this.
Thanks in advance!
OK, I found it out. I just had to create a std::istream and pass it the address of the buffer:
std::istream incoming(&in);
getline(incoming, transfer);
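A slightly fuller sketch of the resulting read loop, under the same assumptions as the snippet above (this replaces the boost::iostreams::copy call, which would otherwise consume the stream):

std::istream incoming(&in);   // wrap the filtering streambuf
std::string transfer;
while (std::getline(incoming, transfer)) {
    // process one decompressed line
}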