Error creating a database with Chef using database cookbook - vagrant

I'm provisioning a Vagrant bento/centos6.7 box using the chef_solo provisioner, with the vagrant-berkshelf plugin for the cookbook dependencies.
My project folder looks like this:
├── Vagrantfile
└── cookbooks
    └── my_cookbook
        ├── Berksfile
        ├── metadata.rb
        └── ...
Inside my Vagrantfile (which is the default for bento/centos6.7 plus the following config):
config.berkshelf.enabled = true
config.berkshelf.berksfile_path = "cookbooks/my_cookbook/Berksfile"
config.vm.provision "chef_solo" do |chef|
  chef.add_recipe "my_cookbook"
end
In my metadata.rb:
depends 'mysql2_chef_gem', '~> 1.1'
depends 'database', '~> 5.1'
When I provision my vagrant machine, I get the following error:
Error executing action `create` on resource 'mysql_database[my_database]'
Mysql2::Error
-------------
Lost connection to MySQL server at 'reading initial communication packet', system error: 110
PS: it works perfectly on bento/centos7.2
EDIT: Here is the database creation part:
# Install the MySQL client
mysql_client 'default' do
  action :create
end

# Configure the MySQL service
mysql_service 'default' do
  initial_root_password node['webserver']['database']['root_password']
  action [:create, :start]
end

# Install the mysql2 Ruby gem
mysql2_chef_gem 'default' do
  action :install
end

mysql_database node['webserver']['database']['db_name'] do
  connection(
    :host => node['webserver']['database']['host'],
    :username => node['webserver']['database']['root_username'],
    :password => node['webserver']['database']['root_password']
  )
  action :create
end
EDIT 2: It doesn't really work on bento/centos7.2. It doesn't crash, but MySQL seems to be dead and running sudo systemctl start mysqld hangs.

I was misled by the fact that I was using many new things (I'm also new to Chef), so I thought the problem came from a different source (Vagrant, bad Berksfile integration, something else).
Turns out I just didn't read the docs, which clearly state:
Logging into the machine and typing mysql with no extra arguments will fail. You need to explicitly connect over the socket with mysql -S /var/run/mysql-foo/mysqld.sock, or over the network with mysql -h 127.0.0.1
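One way to apply this in the recipe is to point the connection at 127.0.0.1 instead of localhost, so the mysql2 gem connects over TCP rather than hunting for the default socket. A minimal sketch of the mysql_database resource from the question with that change:

mysql_database node['webserver']['database']['db_name'] do
  connection(
    # 'localhost' makes the mysql2 gem look for the default socket at
    # /var/lib/mysql/mysql.sock, which the cookbook-managed service does
    # not create; 127.0.0.1 forces a TCP connection instead.
    :host => '127.0.0.1',
    :username => node['webserver']['database']['root_username'],
    :password => node['webserver']['database']['root_password']
  )
  action :create
end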

Related

Multi-machine Vagrant with Ubuntu - Sinatra & PostgreSQL

I'm stuck again. I need to provision a multi-machine environment: one VM for a Sinatra app and a second for its PostgreSQL DB.
So far, I've managed to get the Sinatra app up and running in the ubuntu/xenial64 box, but the provisioning "breaks" when it hits the configuration for the DB:
Vagrant.configure("2") do |config|
  config.vm.define "app" do |app|
    # Use ubuntu/xenial64 as the virtual machine
    app.vm.box = "ubuntu/xenial64"
    # Use a private network to connect the VM to the local machine via an IP with an alias
    app.vm.network "private_network", ip: "192.168.10.100"
    app.hostsupdater.aliases = ["development.local"]
    # Sync the 'app' directory in the local directory to '/app' on the VM
    app.vm.synced_folder "app", "/app"
    # Use the provisioning script in 'environment' to provision the VM for a Sinatra environment
    app.vm.provision "shell", path: "environment/app/provision.sh"
    app.vm.provision "shell", inline: set_env({ DATABASE_URL: "postgresql://myapp:dbpass@localhost:15432/myapp" })
  end

  config.vm.define "db" do |db|
    db.vm.box = "ubuntu/trusty64"
    db.vm.host_name = "postgresql"
    db.vm.network "private_network", ip: "10.0.2.15"
    # db.vm.forward_port 8000, 8000
    db.hostsupdater.aliases = ["database.local"]
    # db.vm.share_folder "home", "/home/vagrant", ".", :create => true
    db.vm.provision "shell", path: "environment/db/provision.sh", privileged: false
  end
end
As you've probably guessed, I'm running an external provisioning script for the PG setup. The odd thing is that I'm using the script recommended on Postgres' own site here.
In a separate location, I've git cloned that repo and followed the instructions and it works absolutely fine, creating a properly provisioned VM with PG installed.
However, I want to run a single vagrant up command that provisions both the app and db correctly and has them speak to each other.
I'm (quite clearly) new to provisioning and DevOps as a whole, so would really appreciate some help.
I've uploaded my hilariously broken code here for you kind souls to look over if you feel so inclined.
Vagrant's documentation on multi-machine setups is quite thin, and Google isn't being much help.
Thanks!
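One concrete issue stands out in the posted Vagrantfile: the db VM's private_network address (10.0.2.15) falls in the 10.0.2.x range VirtualBox already uses for its NAT interface, and it isn't on the same subnet as the app VM (192.168.10.100), so the two machines can't reach each other over the private network. A sketch of just the networking portion with both VMs on one subnet and DATABASE_URL pointing at the db VM (the 192.168.10.101 address and in-guest port 5432 are assumptions; the question's 15432 is the host-forwarded port from the wiki setup):

Vagrant.configure("2") do |config|
  config.vm.define "app" do |app|
    app.vm.box = "ubuntu/xenial64"
    app.vm.network "private_network", ip: "192.168.10.100"
    app.vm.provision "shell", path: "environment/app/provision.sh"
    # Point the app at the db VM's private address, not at localhost
    app.vm.provision "shell", inline: set_env({ DATABASE_URL: "postgresql://myapp:dbpass@192.168.10.101:5432/myapp" })
  end

  config.vm.define "db" do |db|
    db.vm.box = "ubuntu/trusty64"
    # Same subnet as the app VM so the two machines can talk to each other;
    # 10.0.2.x clashes with VirtualBox's NAT range
    db.vm.network "private_network", ip: "192.168.10.101"
    db.vm.provision "shell", path: "environment/db/provision.sh", privileged: false
  end
end

PostgreSQL on the db VM also has to accept connections on that interface (listen_addresses in postgresql.conf and a pg_hba.conf entry for 192.168.10.0/24); the wiki's script may only open access via the forwarded localhost port.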

Issue configuring puppetlabs/apache module with Vagrant

I started using Vagrant and Puppet recently, and I am having a bit of difficulty getting Puppet to work.
With Puppet I want to change the Apache user and group to vagrant, to solve a permission issue when sharing the folder.
I want to do it using the following Puppet config:
class { "apache":
user => "vagrant",
group => "vagrant",
}
Reference : http://ryansechrest.com/2014/04/unable-set-permissions-within-shared-folder-using-vagrant-virtualbox/
For this I installed Puppet on both my host and guest machines. On the host machine I added the following config to the Vagrantfile:
config.vm.provision :puppet do |puppet|
  puppet.manifests_path = 'puppet/manifests'
  puppet.module_path = 'puppet/modules'
end
And I created the file puppet/manifests/default.pp on the host machine with the following content:
node 'node1' {
  include apache
  class { "apache":
    user  => "vagrant",
    group => "vagrant",
  }
}
When I run vagrant provision, I get the following error:
==> default: Error: Could not find default node or by name with 'localhost' on node localhost
==> default: Error: Could not find default node or by name with 'localhost' on node localhost
The SSH command responded with a non-zero exit status. Vagrant
assumes that this means the command failed. The output for this command
should be in the log above. Please read the output to determine what
went wrong.
Where am I going wrong?
Just keep it simple:
"For this i installed puppet on my host and guest machine,"
You only need Puppet installed on your guest machine; you can keep your host clean.
You reference and define puppet/manifests/default.pp, which is fine; just remove the node part. The error means no node definition matched: the manifest only defines node 'node1', while the VM identifies itself as localhost, and there is no node default fallback.
Package {
  allow_virtual => true,
}

class { "apache":
  user  => "vagrant",
  group => "vagrant",
}

include apache
Can you confirm you have an apache module in your host's puppet/modules or installed on the guest? You can have a shell provisioner run something like:
#!/bin/bash
mkdir -p /etc/puppet/modules
# `puppet module install puppetlabs-apache` installs the module as 'apache'
if [ ! -d /etc/puppet/modules/apache ]; then
  puppet module install puppetlabs-apache
fi
assuming you're talking about this apache module; otherwise replace it with the module you're using, if it comes from the Forge.
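To wire that in, the bootstrap script can run as a shell provisioner declared before the Puppet provisioner, since Vagrant runs provisioners in order. A sketch (the puppet/bootstrap.sh path is made up for the example):

# Install the apache module on the guest before Puppet runs
config.vm.provision "shell", path: "puppet/bootstrap.sh"
config.vm.provision :puppet do |puppet|
  puppet.manifests_path = 'puppet/manifests'
  puppet.module_path = 'puppet/modules'
end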

Chef Client Server Authentication through Vagrant

I'm trying to provision a Vagrant instance with Chef by connecting to a Chef server.
My Vagrantfile looks similar to this:
Vagrant.configure("2") do |config|
  config.vm.box = "opentable/win-2012r2-standard-amd64-nocm"
  config.vm.provider "virtualbox"
  config.vm.communicator = "winrm"
  config.vm.guest = :windows
  config.omnibus.chef_version = :latest
  config.berkshelf.enabled = true

  config.vm.provision :chef_client do |chef|
    chef.chef_server_url = "https://chef.website.com"
    chef.validation_key_path = "C:/chef/validation.pem"
    chef.run_list = ["recipe[cookbook::default]"]
  end
end
According to this documentation, this Vagrantfile should contain all the fields I need to connect to the Chef server and pull down the relevant cookbooks and databags.
However, I am running into the following error when running the Chef client provisioner:
The following berks command failed to execute:
C:\opscode\chefdk\embedded\bin/berks.BAT upload --berksfile C:/path/to/cookbook/Berksfile --no-freeze --force
The stdout and stderr are shown below:
stdout: E, [2015-08-10T17:02:57.654352 #8488] ERROR -- : Ridley::Errors::ClientKeyNotFoundOrInvalid: client key is not found or invalid or not found at: 'C:/chef/client.pem'
Indeed, the client.pem is not there.
While the error itself is pretty self-explanatory, I don't understand why I need to specify a client.pem on the initial Chef client run.
Is there a way for Vagrant to create this itself? Better yet, can I make it so that it doesn't need a client identifier at all?
This is only a Vagrant instance, so I don't need to keep this node on the Chef server. Based on the Chef client provisioner documentation I don't see why I need a client.pem file, as the instructions make no mention of this.
Thanks for your help!
Try disabling Berkshelf:
config.berkshelf.enabled = false
Bootstrapping a new Chef client against a chef server does not require Berkshelf. Berkshelf is a tool used on Chef workstations to manage cookbooks on the chef server.
PS: The vagrant-berkshelf plugin was at one stage deprecated in favour of Test Kitchen. It appears to be back again, but in the meantime I've grown to appreciate the benefits of test-driven infrastructure!
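Since this is a throwaway instance, it may also be worth letting Vagrant clean the node up on destroy; the chef_client provisioner has documented options for this. A sketch added to the provisioner block from the question:

config.vm.provision :chef_client do |chef|
  chef.chef_server_url = "https://chef.website.com"
  chef.validation_key_path = "C:/chef/validation.pem"
  chef.run_list = ["recipe[cookbook::default]"]
  # Remove the node and client from the Chef server on `vagrant destroy`,
  # so short-lived Vagrant nodes don't pile up there
  chef.delete_node = true
  chef.delete_client = true
end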

puppet looking for hiera.yaml in the wrong place

I want puppet to look for hiera.yaml in /etc but it's looking for it in /etc/puppet. I put a line into puppet.conf:
hiera_config = /etc/hiera.yaml
But it still gives me the hiera.yaml update warning when I run the script.
I'm running the script from Vagrant 1.2.2, using Puppet 3.2.2.
I'm running CentOS 6.4 in a VM.
I found that the Puppet provisioner in Vagrant now supports hiera_config_path, which does exactly what is desired:
config.vm.provision :puppet do |puppet|
  # Path on the host machine to hiera.yaml
  puppet.hiera_config_path = '/Users/me/vms/hiera/hiera.yaml'
  # This sets the relative path for the hiera data directories
  puppet.working_directory = '/Users/me/vms/hiera'
end
This is documented in Vagrant: Up and Running but I didn't find it until I started looking into the vagrant source to implement this feature myself.
Hmmm... On Vagrant 1.2.2 and Puppet 3.2.3, I am able to set hiera_config in puppet.conf without problems. I would double-check that you are editing /etc/puppet/puppet.conf on the Vagrant VM, not on the host machine, and that the hiera_config line is in the [main] block, not just in the [master] block.
If both of those conditions are true and it is still not working, you might try explicitly setting hiera_config in your Vagrantfile:
config.vm.provision :puppet do |puppet|
  ...
  puppet.options = '--hiera_config=/etc/hiera.yaml'
end
Good luck!
Puppet provisioning runs as the root user, not vagrant, which is why it doesn't take notice of your puppet.conf in /vagrant.
If you run puppet config print inside the VM as both the vagrant user and root, you will see all the Puppet config settings per user and can compare.

Is there a way to secure proxy users/passwords for Vagrant configs?

I am working through several use cases with Vagrant and have been having difficulty coming up with a good solution for handling corporate proxies in an elegant way. In my initial Vagrantfile, I ended up with this config for apt.conf:
user = 'me'
pwd = 'mypwd'
config.vm.provision :shell, :inline => "echo 'Acquire::http::Proxy \"http://#{user}:#{pwd}@proxy.corp.com:3210\";' >> /etc/apt/apt.conf"
config.vm.provision :shell, :inline => "echo 'Acquire::https::Proxy \"http://#{user}:#{pwd}@proxy.corp.com:3210\";' >> /etc/apt/apt.conf"
config.vm.provision :shell, :inline => "echo 'Acquire::socks::Proxy \"http://#{user}:#{pwd}@proxy.corp.com:3128\";' >> /etc/apt/apt.conf"
Obviously, I want to avoid having my user/password stored in the Vagrantfile since I am planning on keeping it under version control. My next attempt was to prompt from within the Vagrantfile using the highline plugin, but that causes the prompt to appear during every vagrant command and not just during init (when this config would apply).
Am I going about this the wrong way? If so, what other options are available to deal with proxy configuration that fits well into the Vagrant model?
You could do this in the following way:
Create a file called proxy.yml and add it to your .gitignore so that it doesn't get committed.
Then inside your Vagrantfile you could have something like this:
if File.exist?("proxy.yml")
  require 'yaml'
  proxy = YAML::load(File.open('proxy.yml'))
  config.vm.provision :shell, :inline => "echo 'Acquire::http::Proxy \"http://#{proxy['user']}:#{proxy['pass']}@proxy.corp.com:3210\";' >> /etc/apt/apt.conf"
end
The contents of proxy.yml would be:
user: "username"
pass: "password"
You can use the vagrant-proxyconf plugin:
vagrant plugin install vagrant-proxyconf
As you probably want to use the same settings for all Vagrant VMs, you can put the configuration to ~/.vagrant.d/Vagrantfile (which is local to your machine):
config.apt_proxy.http = "http://me:mypwd@proxy.corp.com:3210"
By default, Apt uses the same proxy for HTTPS URIs too, so you shouldn't need to specify it separately in your case.
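For completeness, a ~/.vagrant.d/Vagrantfile carrying that setting could look like this (a sketch; the has_plugin? guard just keeps the file harmless on machines without the plugin):

# ~/.vagrant.d/Vagrantfile -- merged into every project's Vagrantfile on this machine
Vagrant.configure("2") do |config|
  if Vagrant.has_plugin?("vagrant-proxyconf")
    config.apt_proxy.http = "http://me:mypwd@proxy.corp.com:3210"
  end
end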
Another option is to pass the configuration with environment variables, for example on the command line, in ~/.bashrc, etc.:
export VAGRANT_APT_HTTP_PROXY="http://me:mypwd@proxy.corp.com:3210"
The plugin can also configure proxies for the whole VM, not only for Apt.
