Can I move parts of chef.json to databags? - vagrant

I'm using Vagrant + chef-solo + the Berkshelf plugin. In my Vagrantfile I have this:
chef.json = {
  "postgresql" => {
    "password" => {
      "postgres" => "password"
    }
  },
  "database" => {
    "create" => ["chembl_18"]
  },
  "build_essential" => {
    "compiletime" => true
  }
}
If I add some more configuration here, my Vagrantfile will quickly become messy. I was trying to change this, so I've created a databags/postgresql/json_attribs.json file with the following content:
{
  "postgresql": {
    "password": {
      "postgres": "iloverandompasswordsbutthiswilldo"
    }
  }
}
and modified the contents of my Vagrantfile:
chef.json = {
  :postgresql => ["json_attribs"],
  "database" => {
    "create" => ["chembl_18"]
  },
  ...
}
But it doesn't work, I'm getting:
==> default: 58>> default['postgresql']['dir'] = "/etc/postgresql/#{node['postgresql']['version']}/main"
...
==> default: [2014-05-14T12:36:51+00:00] ERROR: Running exception handlers
==> default: [2014-05-14T12:36:51+00:00] ERROR: Exception handlers complete
==> default: [2014-05-14T12:36:51+00:00] FATAL: Stacktrace dumped to /var/chef/cache/chef-stacktrace.out
==> default: [2014-05-14T12:36:51+00:00] FATAL: TypeError: can't convert String into Integer
So how and where should I place these settings in order to split them into several separate files?

No, you can't just move it to a data bag without changing the recipe code.
Attributes (which you are supplying through your Vagrant config) are distinct from data bags, which also store JSON data. Data in data bags just sits there (on the Chef Server, or in a local data_bags directory when using chef-solo) until you read it explicitly in your cookbooks.
Reading from data bags happens through the search(), data_bag() and data_bag_item() functions; see the documentation.
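If you were willing to change cookbook code anyway (for example in a small wrapper cookbook), a minimal sketch of reading that bag could look like the following; it assumes the bag layout from the question, and the attribute copy is purely illustrative, not something the upstream postgresql cookbook does:

# Hypothetical wrapper-recipe snippet: read the item and copy the value
# into the node attributes that the postgresql cookbook expects.
pg_secrets = data_bag_item('postgresql', 'json_attribs')
node.default['postgresql']['password']['postgres'] =
  pg_secrets['postgresql']['password']['postgres']

With chef-solo under Vagrant you would also have to point the provisioner at your bags, e.g. chef.data_bags_path = "databags".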
To avoid cluttering your Vagrantfile, you should either use a role or a wrapper cookbook.
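As a sketch of the role approach (the role name and file path are made up here), the attributes can move out of the Vagrantfile into a role file that the chef-solo provisioner loads:

# roles/myapp.rb -- hypothetical role carrying the attributes from chef.json
name "myapp"
default_attributes(
  "postgresql" => {
    "password" => { "postgres" => "password" }
  },
  "database" => {
    "create" => ["chembl_18"]
  },
  "build_essential" => {
    "compiletime" => true
  }
)

The Vagrantfile then only needs chef.roles_path = "roles" and chef.add_role("myapp"), which keeps it short no matter how many attributes you add.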

Related

Function to read role, environment file in masterless puppet

I'm working with Puppet 4.5 in a masterless configuration and am trying to create a Puppet function to read a simple config file that assigns roles and environments. I don't have any integration with hiera/facter that I can change.
The file format is:
host1::java_app_node::qa
host2::nodejs_app_node::prod
The Puppet function that will read this file is in a module called homebase. I want the function to return a hash or an array of hashes that splits the config values. This will let me use them in templates.
In modules/homebase/manifests/init.pp I define:
$role_file = 'puppet://role.lst'
I then created modules/homebase/functions/get_roles.pp as follows:
function homebase::get_roles() {
  $func_name = 'homebase::get_roles()'
  if ! File.exists?($::homebase::role_file) {
    fail("Could not find #{$::homebase::role_file}")
  }
  hosts = { }
  File.open($::homebase::role_file).each |line| {
    parts = line.split(/::/)
    hosts[parts[0]] = { 'host' => parts[0], 'role' => parts[1], 'env' => parts[2] }
  }
  return hosts
}
In other classes, I then want to call:
class myapp {
  $servers = homebase::get_roles().each | k, v | {
    $v['host'] if $v['role'] =~ /myapp/ && $v['env'] == $environment
  }
  file { 'myapp.cfg':
    ensure => file,
    path   => '/opt/myapp/myapp.cfg',
    source => template("/myapp/myapp.cfg.erb"),
    mode   => '0644',
    owner  => myuser,
    group  => myuser,
  }
}
Seems like there would be a better way to do this. Am I completely off base?
There turned out to be a much easier way to do this than trying to create a function to read a non-standard configuration file. Instead, I used a site.pp file to create node {} entries. I also parameterized the myapp class to take inputs based on the node (a sketch of that class follows the site.pp below).
So my site.pp looks like:
node 'server1.mydomain', 'server2.mydomain' {
  $myvar = [ 'val1', 'val2' ]
  class { 'myapp':
    values => $myvar
  }
}
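For completeness, a minimal sketch of the parameterized myapp class referenced above; the parameter name values matches the site.pp snippet, everything else is illustrative:

# Hypothetical parameterized class; adjust the managed file and template to your app.
class myapp (
  Array[String] $values = [],
) {
  file { '/opt/myapp/myapp.cfg':
    ensure  => file,
    content => template('myapp/myapp.cfg.erb'),
    mode    => '0644',
  }
}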
This could probably be improved. One issue is that with a non-Puppet configuration file I was also able to control execution from my bash wrapper script; much of the need for that went away, though, with the node definitions.

Logstash: how to use environment variables in input host

I want to print the host source to the output. For this, a local or global variable is necessary, but I don't want to use global variables like 'export ...'.
So, before the input{}, I put the host into metadata and then use it in input{}, like below:
filter {
  environment {
    add_field => {
      "[#metadata][TEMP]" => "127.0.0.1"
    }
  }
}
input {
  udp {
    host => "%{[#metadata][TEMP]}"
    port => "10000"
  }
}
output {
  udp {
    host => "127.0.0.1"
    port => "10001"
  }
}
But Logstash does not start, and the log looks like this:
[WARN ][logstash.inputs.udp ] UDP listener died {:exception=>#<SocketError: bind: name or service not known>
So how can I solve this problem?
Let me try to answer your question in two steps.
The error message
Your config file is malformed. The workflow is always like this:
# This is a comment. You should use comments to describe
# parts of your configuration.
input {
  ...
}
filter {
  ...
}
output {
  ...
}
That is why you get the error message: filters only run on events after an input has received them, so a field set in a filter can never be used to configure an input plugin. The %{[#metadata][TEMP]} reference is never resolved, and the UDP input tries to bind to that literal string as a hostname.
Multiple input sources
If you want to add information to your events depending on which input is used, you can add a type during input handling. Here is an example config file:
input {
  file {
    type => "file"
    path => "/var/log/some_name.log"
  }
  udp {
    type => "udp"
    host => "127.0.0.1"
    port => "10000"  # listen port from the question; the output below forwards to 10001
  }
}
filter {
  # can be omitted, if not used
}
output {
  udp {
    host => "127.0.0.1"
    port => "10001"
  }
}
The type is stored as part of the event itself, so you can also use the type to search for it in Kibana.
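If needed, the type can also drive conditionals further down the pipeline. A small sketch, reusing only the values from the example above:

# Hypothetical use of the type field to treat the two inputs differently.
filter {
  if [type] == "udp" {
    mutate { add_tag => [ "from_udp" ] }
  }
}
output {
  if [type] == "file" {
    stdout { codec => rubydebug }
  } else {
    udp {
      host => "127.0.0.1"
      port => "10001"
    }
  }
}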

Logstash for Vagrant: Address already in use

I have a Vagrant image in which there is an application; inside the Vagrant image it is reachable on port 2401, and depending on the service you want you call a specific address (e.g. "curl -X GET http://127.0.0.1:2401/provider/ipfix"). To reach it from outside the Vagrant machine I have set up port forwarding in the Vagrantfile ("config.vm.network :forwarded_port, guest: 2401, host: 8080"), so running "curl -X GET http://127.0.0.1:8080/provider/ipfix" from the host gives the same output.
I am now at the stage of installing Logstash. My issue is that when I run Logstash with the config file below I get the error "Address already in use". I also tried to use fields to guide it to the specific output. What workaround would you suggest?
input {
  tcp {
    host => localhost
    port => 8080
    add_field => {
      "field1" => "provider"
      "field2" => "ipfix"
    }
    codec => netflow {
      versions => [10]
      target => ipfix
    }
    type => ipfix
  }
}
output {
  stdout { codec => rubydebug }
  elasticsearch {
    index => "IPFIX-logstash-%{+YYYY.MM.dd}"
  }
}
If I'm reading this right, you're expecting Logstash to use TCP to connect to localhost:8080 to fetch information that it will then process.
That's not what this input does. This creates a listener on 127.0.0.1:8080, so the error message about 'already in use' is quite correct.
Considering you're using curl as an example of fetching this data, I suggest the http_poller plugin is better for what you want.
input {
  http_poller {
    urls => {
      IPFIX => "http://127.0.0.1:8080/provider/ipfix"
    }
    request_timeout => 30
    schedule => { "every" => "5s" }
    tags => [ 'ipfix' ]
  }
}
This will hit the known-working CURL URL every 5 seconds with a GET request.
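As a follow-on sketch (not part of the original answer), the 'ipfix' tag set above could then route just these events to your Elasticsearch output; note that Elasticsearch index names must be lowercase, so the IPFIX-logstash-... index from the question would need renaming anyway:

# Hypothetical output section keyed on the tag added by http_poller.
output {
  if "ipfix" in [tags] {
    elasticsearch {
      index => "ipfix-logstash-%{+YYYY.MM.dd}"
    }
  }
  stdout { codec => rubydebug }
}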

vagrant puppet, composer unable to install laravel

I only recently learned about Puppet, and I'm trying to install Laravel while Vagrant is provisioning so that everything is already set up (able to run Laravel) when I log in / ssh into the box. But I got stuck: provisioning reports that it executed successfully, yet after I do vagrant ssh the laravel command is not available.
php5, php5-cli, etc., Composer and other dependencies are already installed before this part of the code.
class laravel {
  Exec {
    path => "/bin:/sbin:/usr/bin:/usr/sbin",
  }
  exec { "install-laravel" :
    command     => "/usr/local/bin/composer global require 'laravel/installer'",
    require     => [Package["php5-cli", "php5-dev"], Exec["install-composer", "set-composer-as-global"]],
    cwd         => "/home/vagrant/",
    environment => ["COMPOSER_HOME=/home/vagrant"],
    user        => root,
    group       => root,
  }
  exec { "add-laravel-command" :
    command     => "mkdir /usr/local/bin/laravel",
    environment => ["LARAVEL_HOME=/home/vagrant"],
    onlyif      => "test -d /usr/local/bin/composer",
    require     => Exec["install-laravel"],
    user        => root,
  }
  exec { "set-laravel-as-globall" :
    command => "mv /home/vagrant/.composer/vendor/bin /usr/local/bin/laravel",
    onlyif  => "test -d /.composer/vendor/bin",
    require => Exec["add-laravel-command"],
    user    => root,
  }
}
Output
==> default: Notice: /Stage[main]/Laravel/Exec[install-laravel]/returns: executed successfully
==> default: Debug: /Stage[main]/Laravel/Exec[install-laravel]: The container Class[Laravel] will propagate my refresh event
==> default: Debug: Exec[add-laravel-command](provider=posix): Executing check 'test -d /usr/local/bin/composer'
==> default: Debug: Executing 'test -d /usr/local/bin/composer'
==> default: Debug: Exec[set-laravel-as-globall](provider=posix): Executing check 'test -d /.composer/vendor/bin'
==> default: Debug: Executing 'test -d /.composer/vendor/bin'
==> default: Debug: Class[Laravel]: The container Stage[main] will propagate my refresh event
Any help would be much appreciated. Thanks
Create a file under puppet/modules/laravel/files and name it composer.json with the following content:
{
  "require": {
    "laravel/installer": "^1.3"
  }
}
Laravel init.pp file
class laravel {
  Exec {
    path  => "/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin",
    user  => root,
    group => root,
  }
  file { "/usr/local/bin/laravel-devtools" :
    ensure => directory,
    owner  => root,
    group  => root,
  }
  file { "/usr/local/bin/laravel-devtools/composer.json" :
    source  => "puppet:///modules/laravel/composer.json",
    require => File["/usr/local/bin/laravel-devtools"],
    owner   => root,
    group   => root,
  }
  exec { "install-laravel" :
    command     => "sudo composer require 'laravel/installer'",
    onlyif      => "test -f /usr/local/bin/composer",
    require     => [
      Package["nginx", "php5-cli", "php5-dev", "php5-mysql"],
      File["/usr/local/bin/laravel-devtools", "/usr/local/bin/laravel-devtools/composer.json"],
    ],
    environment => ["COMPOSER_HOME=/home/vagrant"],
    cwd         => "/usr/local/bin/laravel-devtools",
    user        => root,
    group       => root,
  }
  exec { "set-laravel-as-global" :
    command => "sudo ln -s /usr/local/bin/laravel-devtools/vendor/laravel/installer/laravel /usr/local/bin/laravel",
    require => [
      File["/usr/local/bin/laravel-devtools", "/usr/local/bin/laravel-devtools/composer.json"],
      Exec["install-laravel"],
    ],
  }
}
Make sure composer is installed first during provision.
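If Composer is not already handled elsewhere in your provisioning, a minimal sketch of such a step could look like this (the resource title, install path and the Package['php5-cli'] dependency are assumptions, and curl must be available on the box):

# Hypothetical Composer bootstrap; the shell provider is needed because of the pipe,
# and 'creates' makes it a no-op once /usr/local/bin/composer exists.
exec { 'install-composer':
  command  => 'curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer',
  path     => '/bin:/usr/bin:/usr/local/bin',
  provider => shell,
  creates  => '/usr/local/bin/composer',
  require  => Package['php5-cli'],
}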

Tomcat7 chef cookbook SSL problems

I'm trying to set up a Chef recipe for automatic deployment of my app over SSL with the tomcat Chef cookbook.
It works fine without SSL, but when I try to set the attributes for SSL support I get the error:
undefined method `truststore_password' for Custom resource tomcat_instance from cookbook tomcat.
My role:
name "myapp"
override_attributes ({
"java" => {
"jdk_version"=> "6"
},
"oracle" => {
"accept_oracle_download_terms" => true
},
"tomcat" => {
"base_version" => 7,
"java_options" => "${JAVA_OPTS} -Xmx128M -Djava.awt.headless=true",
"secure" => true,
"client_auth" => true,
"scheme" => "https",
"ssl_enabled_protocols" => "TLSv1",
"keystore_password" => "mypass",
"truststore_password" => "mypass",
"ciphers" => "SSL_RSA_WITH_RC4_128_SHA",
"keystore_file" => "/etc/tomcat7/client.jks",
"truststore_file" => "/etc/tomcat7/cert.jks"
}
})
run_list "recipe[java]", "recipe[tomcat]"
Maybe I'm missing something, because I can't find any good tutorials on how to do this. I'm also using chef-solo with Vagrant.
If you look at the Tomcat cookbook documentation, you will see the following regarding the truststore_password attribute:
node['tomcat']['truststore_password'] - Generated by the secure_password method from the openssl cookbook; if you are using Chef Solo, set this attribute on the node
Perhaps this means that you can not set the attribute in your role definition whilst using Chef Solo, and you have to manually add it to the node attributes JSON file.
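One way to do that with chef-solo under Vagrant (just a sketch, with placeholder passwords) is to pass the passwords as plain node attributes through chef.json, alongside the role:

# Hypothetical Vagrantfile excerpt: set the node attributes the cookbook expects.
config.vm.provision :chef_solo do |chef|
  chef.add_role "myapp"
  chef.json = {
    "tomcat" => {
      "keystore_password"   => "mypass",
      "truststore_password" => "mypass"
    }
  }
end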
