How can I tell what Vagrantfile/directory is responsible for managing a given VM image in Virtualbox? I think I might have deleted projects without doing a vagrant destroy first.
I'm assuming you are using a Unix-based OS, Vagrant 1.9.6 and VirtualBox 5, since the details can differ in other setups.
The vagrant up command generates a .vagrant/ folder inside the current path; the metadata files created there point to another path under your user's $HOME, usually ~/.vagrant.d/.
For instance, if you execute vagrant up inside ~/example-vm/, Vagrant will generate the following files under the project directory:
~/example-vm/.vagrant/
├── machines
│   └── default
│       └── virtualbox
│           ├── action_provision
│           ├── action_set_name
│           ├── creator_uid
│           ├── id
│           ├── index_uuid
│           ├── private_key
│           ├── synced_folders
│           └── vagrant_cwd
└── provisioners
    └── ansible
        └── inventory
            └── vagrant_ansible_inventory
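The id file above stores the identifier that VirtualBox uses for this machine, so you can cross-check a project directory against the VMs VirtualBox knows about; a small sketch using the example paths above:
# Print the VirtualBox UUID of the VM managed by this project directory
cat ~/example-vm/.vagrant/machines/default/virtualbox/id
# List all VirtualBox VMs (name and UUID) to find the matching one
VBoxManage list vms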
Vagrant will also generate the following files under this path:
~/.vagrant.d/boxes/example-vm/
└── 0
    └── virtualbox
        ├── box-disk1.vmdk
        ├── box-disk2.vmdk
        ├── box.ovf
        ├── include
        │   └── _Vagrantfile
        ├── metadata.json
        └── Vagrantfile
Then Vagrant will insert some metadata about your VM into the file ~/.vagrant.d/data/machine-index/index (taken from ~/example-vm/.vagrant/machines/default/virtualbox/id and index_uuid) so that Vagrant and your provider know where to find and interact with the VM, which in this case is backed by the .vmdk and .ovf files.
You should just edit the .vmdk and .ovf files.
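As a side note, if the goal is simply to map an existing VirtualBox VM back to the project directory that manages it, Vagrant ships a built-in command for exactly that (available since Vagrant 1.6, so it works with 1.9.6); a quick sketch:
# List every Vagrant-managed machine together with the directory holding its Vagrantfile
vagrant global-status
# Drop stale entries left behind by project directories deleted without vagrant destroy
vagrant global-status --prune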
I use Ansible to deploy my user-specific configuration (shell, text editor, etc.) on a newly installed system. That's why I have all config files in my role's files directory, structured the same way as they should be placed in my home directory.
What's the correct way to realize this? I don't want to list every single file in the role; existing files should be overwritten and existing directories should be merged.
I've tried the copy module, but the whole task is skipped; I assume because the parent directory (.config) already exists.
Edit: added the requested additional information.
Ansible Version: 2.9.9
The role's copy task:
- name: Install user configurations
  copy:
    src: "home/"
    dest: "{{ ansible_env.HOME }}"
The files to copy in the role directory:
desktop-enviroment
├── defaults
│   └── main.yml
├── files
│   └── home
│       ├── .config
│       │   ├── autostart-scripts
│       │   │   └── ssh-keys.sh
│       │   ├── MusicBrainz
│       │   │   ├── Picard
│       │   │   ├── Picard.conf
│       │   │   └── Picard.ini
│       │   ├── sublime-text-3
│       │   │   ├── Installed Packages
│       │   │   ├── Lib
│       │   │   ├── Local
│       │   │   └── Packages
│       │   └── yakuakerc
│       └── .local
│           └── share
│               ├── plasma
│               └── yakuake
├── handlers
│   └── main.yml
├── meta
│   └── main.yml
├── tasks
│   ├── desktop-common.yaml
│   ├── desktop-gnome.yaml
│   ├── desktop-kde.yaml
│   └── main.yml
├── templates
└── vars
    └── main.yml
The relevant Ansible output:
TASK [desktop-enviroment : Install user configurations] **
ok: [localhost]
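For reference, one commonly used pattern for deploying a whole tree of dotfiles from a role is the filetree lookup, which walks the role's files/home/ directory and handles directories and files in separate tasks. This is only a hedged sketch against the layout above (Ansible 2.9), not a confirmed fix for the task reporting ok:
- name: Ensure the target directories exist
  file:
    path: "{{ ansible_env.HOME }}/{{ item.path }}"
    state: directory
  with_filetree: home/
  when: item.state == 'directory'
- name: Copy the user configuration files
  copy:
    src: "{{ item.src }}"
    dest: "{{ ansible_env.HOME }}/{{ item.path }}"
  with_filetree: home/
  when: item.state == 'file'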
I have a folder structure like this in Ansible, where global variables are in the root group_vars and environment-specific variables are in inventories/dev/group_vars/all etc.
.
├── ansible.cfg
├── group_vars
│   └── all
├── inventories
│   ├── dev
│   │   ├── group_vars
│   │   │   └── all
│   │   └── hosts
│   └── prod
│       ├── group_vars
│       │   └── all
│       └── hosts
└── playbook.yml
I want to be able to reuse the existing variables from both var files in Molecule, but I'm unable to do so because it cannot find the variables. Something similar to the below works, but I need both group_vars/all and inventories/dev/group_vars/all.
Extract of my molecule.yml:
provisioner:
  name: ansible
  inventory:
    links:
      group_vars: ../../../group_vars
I tried comma-separated paths and that doesn't work, because after all it's just a symlink to the file.
Here is my directory structure,
├── README.md
├── internal-api.retry
├── internal-api.yaml
├── ec2.py
├── environments
│   ├── alpha
│   │   ├── group_vars
│   │   │   ├── alpha.yaml
│   │   │   └── internal-api.yaml
│   │   ├── host_vars
│   │   └── internal_ec2.ini
│   ├── prod
│   │   ├── group_vars
│   │   │   ├── prod.yaml
│   │   │   ├── internal-api.yaml
│   │   │   └── tag_Name_prod-internal-api-3.yml
│   │   ├── host_vars
│   │   └── internal_ec2.ini
│   └── stage
│       ├── group_vars
│       │   ├── internal-api.yaml
│       │   └── stage.yaml
│       ├── host_vars
│       └── internal_ec2.ini
├── roles
│   └── internal-api
└── roles.yaml
I am using a separate config for an EC2 instance with tag Name = prod-internal-api-3, so I have defined a separate file, tag_Name_prod-internal-api-3.yaml, in the environments/prod/group_vars/ folder.
Here is my tag_Name_prod-internal-api-3.yaml,
---
internal_api_gunicorn_worker_type: gevent
Here is my main playbook, internal-api.yaml
- hosts: all
  any_errors_fatal: true
  vars_files:
    - "environments/{{env}}/group_vars/{{env}}.yaml" # this has the ssh key, users config according to environments
    - "environments/{{env}}/group_vars/internal-api.yaml"
  become: yes
  roles:
    - internal-api
For prod deployments, I do export EC2_INI_PATH=environment/prod/internal_ec2.ini, and likewise for stage and alpha. In environment/prod/internal_ec2.ini I have added an instance filter: instance_filters = tag:Name=prod-internal-api-3
When I run my playbook, I get this error:
fatal: [xx.xx.xx.xx]: FAILED! => {"changed": false, "msg": "AnsibleUndefinedVariable: 'internal_api_gunicorn_worker_type' is undefined"}
It means that it is not able to pick up the variable from the file tag_Name_prod-internal-api-3.yaml. Why is this happening? Do I need to add it manually with include_vars (I don't think that should be the case)?
Okay, so it is really weird, like really really weird. I don't know whether it has been documented or not (please provide a link if it is).
If your tag Name is like prod-my-api-1, then the file name tag_Name_prod-my-api-1 will not work.
Your filename has to be tag_Name_prod_my_api_1. Yeah, thanks Ansible for making me cry for 2 days.
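Applied to the layout above, that means renaming the prod group_vars file so the hyphens in the tag value become underscores (keeping whichever extension the file actually uses), for example:
mv environments/prod/group_vars/tag_Name_prod-internal-api-3.yml \
   environments/prod/group_vars/tag_Name_prod_internal_api_3.yml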
In Drupal 7 I use
drush-patchfile
to automatically apply patches when installing/updating modules via drush. But in DDEV I don't know how to extend the existing drush with drush-patchfile.
As you can see in the Installation section of https://bitbucket.org/davereid/drush-patchfile, I need to clone the repository into the
~/.drush
directory, and that will add it to the existing drush.
On another project without DDEV, I've already done that by creating a new Docker image file:
FROM wodby/drupal-php:7.1
USER root
RUN mkdir -p /home/www-data/.drush && chown -R www-data:www-data /home/www-data/;
RUN cd /home/www-data/.drush && git clone https://bitbucket.org/davereid/drush-patchfile.git \
&& echo "<?php \$options['patch-file'] = '/home/www-data/patches/patches.make';" \
> /home/www-data/.drush/drushrc.php;
USER wodby
But I'm not sure how to do that in the DDEV container.
Do I need to create a new service based on drud/ddev-webserver or something else?
I've read the documentation but I'm not sure what direction to go in.
Based on @rfay's comment, here is the solution that works for me (and with a little modification it can work for other projects).
I've cloned the repo outside of the docker container; for example, I've cloned it into
$PROJECT_ROOT/docker/drush-patchfile
Create a custom drushrc.php in the $PROJECT_ROOT/.esenca/patches folder (you can choose a different folder):
<?php
# Location of the patches.make file. This should be the location within the docker container.
$options['patch-file'] = '/var/www/html/.esenca/patches/patches.make';
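For context, the patches.make referenced above is a Drush make-format file listing the patches to apply; a purely hypothetical sketch (project name and patch URL are placeholders, check the drush-patchfile README for the exact syntax):
; patches.make (hypothetical example)
core = 7.x
api = 2
; Apply a patch to the hypothetical example_module project
projects[example_module][patch][] = "https://www.drupal.org/files/issues/example-fix.patch"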
Add the following hooks to $PROJECT_ROOT/.ddev/config.yaml:
hooks:
  post-start:
    # Symlink the drush-patchfile directory into /home/.drush
    - exec: "ln -s -t /home/.drush/ /var/www/html/docker/drush-patchfile"
    # Symlink the custom drushrc file.
    - exec: "ln -s -t /home/.drush/ /var/www/html/.esenca/patches/drushrc.php"
The final project structure should look like this:
.
├── .ddev
│   ├── config.yaml
│   ├── docker-compose.yaml
│   ├── .gitignore
│   └── import-db
├── docker
│   └── drush-patchfile
│       ├── composer.json
│       ├── patchfile.drush.inc
│       ├── README.md
│       └── src
├── .esenca
│   └── patches
│       ├── drushrc.php
│       └── patches.make
├── public_html
│   ├── authorize.php
│   ├── CHANGELOG.txt
│   ├── COPYRIGHT.txt
│   ├── cron.php
│   ├── includes
│   ├── index.html
│   ├── index.php
│   ├── INSTALL.mysql.txt
│   ├── INSTALL.pgsql.txt
│   ├── install.php
│   ├── INSTALL.sqlite.txt
│   ├── INSTALL.txt
│   ├── LICENSE.txt
│   ├── MAINTAINERS.txt
│   ├── misc
│   ├── modules
│   ├── profiles
│   ├── README.txt
│   ├── robots.txt
│   ├── scripts
│   ├── sites
│   │   ├── all
│   │   ├── default
│   │   ├── example.sites.php
│   │   └── README.txt
│   ├── themes
│   ├── Under-Construction.gif
│   ├── update.php
│   ├── UPGRADE.txt
│   ├── web.config
│   └── xmlrpc.php
└── README.md
Finally, start the ddev environment:
ddev start
and now you can use drush-patchfile commands within the web docker container.
You can ddev ssh and then sudo chown -R $(id -u) ~/.drush/ and then do whatever you want in that directory (~/.drush is /home/.drush).
When you get it going and you want it to happen on every start, you can encode the instructions you need using post-start hooks: https://ddev.readthedocs.io/en/latest/users/extending-commands/
Please follow up with the exact recipe you use, as it may help others. Thanks!
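Put together, the one-off manual version of that looks roughly like this (repository URL taken from the question, run from the project root):
ddev ssh
# Take ownership of the global drush directory inside the container
sudo chown -R $(id -u) ~/.drush/
# Clone drush-patchfile so drush picks it up (~/.drush is /home/.drush)
git clone https://bitbucket.org/davereid/drush-patchfile.git ~/.drush/drush-patchfile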
Previously I had a similar configuration to this working, but as soon as I added hiera to my puppet build I started having problems. The error I currently have after running vagrant provision is as follows:
==> default: [vagrant-hostsupdater] Checking for host entries
==> default: [vagrant-hostsupdater] found entry for: 192.168.33.10 local.mysite
==> default: Configuring cache buckets...
==> default: Running provisioner: puppet...
==> default: Running Puppet with app.pp...
==> default: stdin: is not a tty
==> default: Error: Could not find class nodejs for local.mysite on node local.mysite
==> default: Error: Could not find class nodejs for local.mysite on node local.mysite
The SSH command responded with a non-zero exit status. Vagrant
assumes that this means the command failed. The output for this command
should be in the log above. Please read the output to determine what
went wrong.
My vagrant config is:
# -*- mode: ruby -*-
# vi: set ft=ruby :
require "yaml"
# Load yaml configuration
config_file = "#{File.dirname(__FILE__)}/config/vm_config.yml"
default_config_file = "#{File.dirname(__FILE__)}/config/.vm_config_default.yml"
vm_external_config = YAML.load_file(config_file)
# Configure Vagrant
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/trusty64"
  config.vm.box_url = "http://cloud-images.ubuntu.com/vagrant/trusty/current/trusty-server-cloudimg-amd64-vagrant-disk1.box"
  config.vm.network :private_network, ip: vm_external_config["ip"]
  config.vm.hostname = vm_external_config["hostname"]
  config.vm.network "forwarded_port", guest: vm_external_config["port"], host: 2368
  config.vm.synced_folder vm_external_config["ghost_path"], "/var/www/mysite.com", :nfs => true
  config.vm.provider :virtualbox do |vb|
    vb.customize ["modifyvm", :id, "--memory", vm_external_config["memory"]]
  end
  config.cache.scope = :box
  config.librarian_puppet.placeholder_filename = ".gitkeep"
  config.vm.provision :puppet do |puppet|
    puppet.hiera_config_path = "puppet/hiera/hiera.yaml"
    puppet.manifests_path = "puppet/manifests"
    puppet.manifest_file = "app.pp"
    puppet.module_path = "puppet/modules"
    puppet.facter = {
      "environment" => ENV['ENV'] ? ENV['ENV'] : 'local'
    }
  end
end
My source tree looks like this (much of it isn't relevant aside from the folder structure for the custom blog module and the hiera config):
├── Vagrantfile
├── config
│   └── vm_config.yml
└── puppet
    ├── Puppetfile
    ├── hiera
    │   ├── common.yaml
    │   ├── hiera.yaml
    │   ├── local
    │   │   └── site.yaml
    │   └── production
    │       └── site.yaml
    ├── manifests
    │   └── app.pp
    └── modules
        ├── blog
        │   └── manifests
        │       └── app.pp
        ├── ghost
        │   └── manifests
        │       └── app.pp
        ├── init.d
        │   └── files
        │       ├── WebhookServer
        │       └── ghost
        ├── mailgunner
        ├── nginx
        │   ├── files
        │   │   ├── local
        │   │   │   ├── mysite.com
        │   │   │   └── mail.mysite.com
        │   │   └── production
        │   │       ├── mysite.com
        │   │       └── mail.mysite.com
        │   └── manifests
        │       └── server.pp
        ├── tools
        │   ├── files
        │   │   ├── local
        │   │   │   ├── backup.sh
        │   │   │   ├── ghostsitemap.sh
        │   │   │   └── init-mysite.sh
        │   │   └── production
        │   │       ├── backup.sh
        │   │       ├── ghostsitemap.sh
        │   │       └── init-mysite.sh
        │   └── manifests
        │       └── install.pp
        └── webhooks
            ├── files
            │   ├── local
            │   │   └── init-webhook.sh
            │   ├── production
            │   │   └── init-webhook.sh
            │   ├── webhook.sh
            │   └── webhooks.rb
            └── manifests
                └── install.pp
hiera.yaml:
---
:backends:
  - yaml
:yaml:
  :datadir: /vagrant/hieradata
:hierarchy:
  - "%{::environment}/site"
  - common
common.yaml
---
classes:
  - site
local/site.yaml
---
:site:
  environment: local
  name: local.mysite
  mailserver: local.mail.mysite
blog/manifests/app.pp
class blog::app {
  class { 'nodejs':
    version => 'v0.10.25',
  } ->
  package { 'pm2':
    ensure   => present,
    provider => 'npm',
    require  => Class['nodejs'],
  }
}
Puppetfile
forge 'https://forgeapi.puppetlabs.com'
mod 'willdurand/nodejs', '1.9.4'
Basically, my problem is that my puppet install is not reinstalling nodejs (I'd removed it previously using an rm -rf puppet/modules/nodejs)
Does anyone have any ideas how or why puppet is now refusing to install the nodejs puppet module in the puppet/modules directory?
FYI - I've installed the willdurand/nodejs module using puppet module install willdurand/nodejs
Any help is much appreciated - I've been banging my head against a brick wall on this for a few days now!
The Puppetfile is used by vagrant-librarian-puppet to install your Puppet modules, so the nodejs module should be installed from it.
Make sure the plugin is installed:
$ vagrant plugin list
vagrant-librarian-puppet (0.9.2)
....
If you don't see the plugin, make sure to install it:
$ vagrant plugin install vagrant-librarian-puppet
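After installing the plugin, re-running provisioning should let librarian-puppet fetch the nodejs module declared in the Puppetfile, for example:
$ vagrant reload --provision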