Let's say I provisioned my VM with a few PHP modules and custom packages.
Then I removed some of those modules/packages from config.yaml or from the exec-always shell scripts.
When running vagrant reload or vagrant provision, they are not removed.
Is this the correct behaviour?
What is the correct Vagrant command to reset the provisioning?
Unfortunately PuPHPet's Puppet code does not currently support ensure flags.
i.e. you can only ever add packages, never remove them.
The reason is simple. To add a package, you simply add it to the config.yaml file. How would you remove it? Remove it from the config.yaml file? Then Puppet won't know anything about it because it's no longer listed.
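For reference, hand-written Puppet can remove a package with an ensure => absent resource; that is the kind of flag PuPHPet's generated code does not expose. A minimal sketch (the package name is only an example):

    package { 'php5-xdebug':   # example package name
      ensure => absent,        # hand-written Puppet; PuPHPet does not generate this
    }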
pyenv-virtualenv offers a nice way of activating an environment at the very instant of entering or leaving a directory that contains a .python-version text file specifying the environment to activate. It works for that directory and all directories contained in it.
The environment is deactivated once we change to a directory above it. This allows one to easily switch between projects or analyses using different Python versions, just by changing directories.
Is there a way of achieving the same behaviour with (ana)conda?
Edit: added the bash tag because, as far as I understand, pyenv achieves this by hooking a custom script into .bashrc (which allows it to monitor directory changes). If there is no built-in way in conda, how would one create a script that makes this possible?
As mentioned in my comment, this is currently not supported. There is however an open issue on conda's GitHub asking for this feature.
In the meantime you could use autoenv, a small tool that automatically runs the code in a .env file when entering a directory and the code in a .env.leave file when leaving it (it supports bash/zsh and a couple of other shells).
A simple example taken from their readme which illustrates the feature quite nicely:
$ echo "echo 'whoa'" > project/.env
$ cd project
whoa
To load a conda environment your .env would simply look like this:
conda activate <my_env>
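If you also want the environment deactivated on leaving the directory, and leave handling is enabled in your autoenv configuration (see Note 1 below), a .env.leave alongside it could simply contain:

    conda deactivate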
Note 1: Check out the Configuration section of their GitHub readme before you start using it.
Note 2: The author of autoenv actually suggests trying direnv instead. However I've never used it, so I can't comment on it.
From autoenv's readme:
you should probably use direnv instead. Simply put, it is higher quality software. But, autoenv is still great, too. Maybe try both? :)
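If you would rather not install an extra tool at all, the same basic idea can be hand-rolled with a small bash hook in ~/.bashrc. The following is only a rough sketch, assuming a hypothetical .conda-env file in the project root that holds the environment name and a shell where conda activate works; unlike pyenv-virtualenv, it does not deactivate when you leave the directory:

    # Rough sketch: activate the conda env named in ./.conda-env (hypothetical file name)
    # whenever the prompt is drawn inside that directory.
    _auto_conda() {
        if [ -f .conda-env ]; then
            local env_name
            env_name=$(<.conda-env)
            if [ "$CONDA_DEFAULT_ENV" != "$env_name" ]; then
                conda activate "$env_name"
            fi
        fi
    }
    PROMPT_COMMAND="_auto_conda${PROMPT_COMMAND:+;$PROMPT_COMMAND}"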
I am experimenting with Puppet using Vagrant. I'm new to Puppet.
I'm installing modules in my Puppet manifest using the approach suggested at: Can I install puppet modules through puppet manifest?
My default.pp contains something like:
$dsesterojava = 'dsestero-java'

exec { 'dsestero-java':
  command => "puppet module install ${dsesterojava}",
  unless  => "puppet module list | grep ${dsesterojava}",
  path    => ['/usr/bin', '/bin'],
}

include java::java_7
I'm trying to import a module and then immediately use the classes defined in it.
Currently, I get:
Error: Could not find class java::java_7
If I comment out the include line and re-run it, the module installs. If I then remove the comment and run the provisioning again, it works.
There is some kind of "chicken and egg" situation here. Can I use a module in the same Puppet manifest that installs it?
How should I solve it?
No, you cannot do this. When your catalog is compiled, Puppet searches the appropriate directories for all of the required code and data. Since the java module does not exist until the catalog is applied, compiling a catalog that depends on it (which happens before application) will fail. You are absolutely dealing with a "chicken and egg" situation here. I highly recommend against using Puppet code to install Puppet code.
Alternatively, the recommended approach to install and manage your Puppet modules is to use one of these solutions:
librarian-puppet: http://librarian-puppet.com/
r10k: https://github.com/puppetlabs/r10k
code-manager (PE only): https://puppet.com/docs/pe/2017.3/code_management/code_mgr.html
These will also solve the problem for you within Vagrant if you are using the agent provisioner and subscribing the Vagrant instance to a Puppet Master.
If you are using the apply provisioner inside of Vagrant, then you will need to go a different route. The simplest solution is to use the shell provisioner to install Puppet modules via puppet module install after Puppet itself is installed (unless you are using a Vagrant box with Puppet baked in, in which case you are probably not installing Puppet on it). Alternatively, you could share a directory with the host where your modules are installed, or install the librarian-puppet or r10k gems onto the Vagrant box and then use them to install into the appropriate path. I can go into more detail on these upon request.
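For instance, with the apply provisioner, a Vagrantfile along these lines would install the module in a shell step before the manifest is compiled; the box name, module name and paths are only illustrative:

    Vagrant.configure("2") do |config|
      config.vm.box = "ubuntu/trusty64"

      # Install the module first, in its own provisioning step...
      config.vm.provision "shell",
        inline: "puppet module list | grep dsestero-java || puppet module install dsestero-java"

      # ...so it is already on the modulepath when the manifest below is compiled.
      config.vm.provision "puppet" do |puppet|
        puppet.manifests_path = "puppet/manifests"
        puppet.manifest_file  = "default.pp"
      end
    end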
I have found this DNSimple ansible module:
http://docs.ansible.com/ansible/dnsimple_module.html
but I cannot find anywhere on that page how to download and install it. How do I go about downloading and installing Ansible modules like this? Thanks.
The accepted answer solved the questioner's problem but didn't address the broader scope of the question.
How to install an Ansible module? The documentation is currently vague as to how to achieve this simple requirement!
An excellent general guide to writing modules (I've no connection to the author) can be found here.
The quickest way is to simply have a folder called library/ in the same folder as your playbook. Inside this folder, place the python script for the Ansible Module. You should now have a corresponding task available to your playbook.
If you want to share your module across multiple projects, you can add an entry to the [defaults] section of /etc/ansible/ansible.cfg pointing to a shared library location, e.g.:
library = /usr/share/ansible/library
The module itself is part of ansible since version 1.6 (as stated here). To use it, you need to have dnsimple on your host machine (also stated in the above description). Install it with sudo pip install dnsimple
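With that in place, a task using the module might look roughly like this; the parameter names come from the module documentation linked above, so double-check them against your Ansible version, and all values are placeholders:

    - name: Create an A record via DNSimple
      dnsimple:
        account_email: you@example.com
        account_api_token: "{{ dnsimple_token }}"
        domain: example.com
        record: www
        type: A
        value: 192.0.2.10
        state: present
      delegate_to: localhost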
It is important to know that the base Ansible modules are not installed by default on the devel version, which is the default version installed when you build from source.
Only a few modules are present, for development purposes.
So when you run your playbook it will complain that the module cannot be found, with the following error message:
couldn't resolve module/action 'xxx'
If you have no choice but to build from source, don't forget to check out a stable branch to get all the basic Ansible modules!
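For example, roughly like this; the branch name is only an example, so pick the stable branch matching the release you want:

    git clone https://github.com/ansible/ansible.git
    cd ansible
    git checkout stable-1.9                    # example; use the stable branch for your target release
    git submodule update --init --recursive    # older branches keep the module repos as submodules
    source ./hacking/env-setup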
When I try to use the Ansible's Composer module and paste the following task into my playbook.yml file I get an error.
playbook.yml
- name: Composer Install Site Dependencies
composer: command=install working_dir=/var/www/html
Error:
ERROR: composer is not a legal parameter in an Ansible task or handler
Ansible failed to complete successfully. Any error output should be
visible above. Please fix these errors and try again.
After some investigation I ran "ansible-doc --list" on the command line to see the available modules, and "composer" is not listed. I am running Ansible version 1.5.4; do I have to add it separately?
As #user272735 indicated in the comments, this is an unreleased module: it's slated for the 1.6 release, which is under "active development". (Admittedly, it was originally slated for 1.4.) You have a couple of options:
install ansible from the bleeding edge. See "running from source". (obviously, this is scary)
ninja-patch the file into your locally installed tree. (obviously, this is scary)
add the file into your local Ansible repo.
As "developing modules" says, a fourth option is to specify your library path via ANSIBLE_LIBRARY or --module-path. HOWEVER, this overrides your global library/module path. That isn't what you want to do unless you are providing every module.
adding into your repo
I'm assuming your repo is named "ansible" and is set up properly, like this:
ansible/
ansible/roles/
ansible/group_vars/
In that case, simply add a library directory at the top (the 'best practices' discusses this but not in the expected section):
ansible/
ansible/roles/
ansible/group_vars/
ansible/library/
Inside it, add the composer module file. That makes its path/file the following:
ansible/library/composer
Note it is not composer.py or anything else. Also, it doesn't seem to need the +x bit, so no fussy worries there.
Once you do that, you can run Ansible commands as you'd expect. The composer module will simply be there.
I would like to have a fully configured Ruby Unix development environment built through Vagrant configuration and provisioning. Ideally it would start from a simple base box (e.g., precise32) and build the environment up through provisioning, in such a way that it is easily repeatable for other team members, can be posted to GitHub, and can be upgraded to new versions of the different technologies just by changing the provisioning. I have not found any full examples of this searching the web, although Rails Dev Box has some useful ideas. Most of the dev environment examples (like Rails Dev Box) do not set up the guest dev environment, because they assume development will be done on the host using a shared-file strategy, or they do the configuration by hand and then save the box rather than provisioning it.
This also needs to work both behind a proxy and with no proxy.
Here are the steps I am thinking will be required:
On the host:
install virtualbox, vagrant, vagrant proxyconf
On the guest, via Vagrantfile/provisioning:
use a base unix box (e.g., precise32)
optionally set proxy variables (if proxyconf plugin is installed and http_proxy env var is set)
provision everything else (puppet, chef, or shell script)
install various Unix tools (apt-get install git, etc.)
set up bash environment
set up vim environment (pathogen plugin, ruby plugins, etc.)
install rvm
install ruby 1.9, 2.0, JRuby, Rubinius
install and configure tmux
Ideally I could push this to GitHub, it could be cloned, then cd to the new directory and vagrant up to have a fully configured dev environment ...
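For the optional proxy step above, something along these lines in the Vagrantfile is what I had in mind (an untested sketch):

    Vagrant.configure("2") do |config|
      config.vm.box = "precise32"

      # Only set proxy values when the plugin is installed and the host exports http_proxy
      if Vagrant.has_plugin?("vagrant-proxyconf") && ENV["http_proxy"]
        config.proxy.http     = ENV["http_proxy"]
        config.proxy.https    = ENV["https_proxy"]
        config.proxy.no_proxy = ENV["no_proxy"]
      end
    end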
Does anyone have any examples of doing this?
My preference for doing a task like this would be to use puppet as the provisioning step in your Vagrantfile.
With something like this, you can always throw something together quick and dirty by doing all the steps in a shell provisioner, but I prefer the Puppet-and-modules approach, as I've found it easier to maintain, extend and share with the team.
I've experimented with a couple of different ways of doing the provisioning with Ruby and rvm, as you mentioned:
There's the rvm puppet module by maestrodev, which lets you configure many of rvm's core features: Ruby versions, gemsets, gems and rvm wrappers. To manage which Puppet modules are included with a project I typically use the librarian-puppet gem, which lets you use a Puppetfile to specify each module and the version you require, much like Bundler. This handles dependencies such as the stdlib and concat modules. This scenario requires external internet access to have been configured before provisioning, so that Ruby and rubygems can be downloaded.
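A Puppetfile for this first approach might look roughly like this; the module names are real Forge modules, but treat the selection as illustrative and pin the versions you have actually tested:

    forge 'https://forgeapi.puppetlabs.com'

    mod 'maestrodev/rvm'      # the rvm module mentioned above
    mod 'puppetlabs/stdlib'   # common dependency
    mod 'puppetlabs/concat'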
Offline installation of rvm: I made the relevant files (rvm itself, Ruby and rubygems) accessible to the Vagrant machine using a shared folder, and turned the offline rvm instructions into a (not very good) Puppet module and used that. One particular gotcha to pay attention to here is the naming of the Ruby source that gets installed; the extension has to be .tar.bz2, as described in the list.
Additionally, for your other provisioning steps you can build up Puppet modules yourself for your additional requirements (vim, tmux, etc.) and keep them versioned separately in git. You can get pretty far with modules using just the 'puppet trifecta':
class vim {
  package { 'vim':
    ensure => installed,
  }

  file { '.vimrc':
    ensure => file,
    ...
  }
}
Additionally check out the puppet forge for modules which might have already been written to do what you want.
So here's an example of what you could check in:
/ Puppetfile
/ README.md
/ Vagrantfile
/ puppet
    /manifests
        site.pp
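puppet/manifests/site.pp can then be a minimal entry point that pulls in the classes you wrote or installed; the class names below are only examples:

    # Assign classes to nodes; 'vim' is the class sketched above,
    # 'rvm' would come from the maestrodev module.
    node default {
      include vim
      include rvm
    }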
And the Vagrant provisioner configuration would be:
Vagrant.configure("2") do |config|
  config.vm.provision "puppet" do |puppet|
    puppet.manifests_path = "puppet/manifests"
    puppet.module_path    = "puppet/modules"
    puppet.manifest_file  = "site.pp"
  end
end
I've used a rake task before to have librarian-puppet pull in the Puppet dependencies from git / the Puppet Forge, along with any additional steps you might need to run before vagrant up. This way, the code-as-configuration is all you check in.
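Such a rake task could be as simple as this (the task name and paths are illustrative):

    # Rakefile: install module dependencies from the Puppetfile before `vagrant up`
    desc "Install Puppet module dependencies into puppet/modules"
    task :modules do
      sh "librarian-puppet install --path puppet/modules"
    end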
Finally, with puppet you can use the facter and hiera tools which are very useful for keeping data out of your modules and worth having a look at as a means of refactoring once you have your initial setup working.