I'm trying to use Netkit to test some of my C applications. To do so, I need gcc installed on my virtual machines, so I'm trying to install it following the instructions in the wiki. The second step is this:
Once vm has started, configure a name server inside its resolv.conf file
Here's what I find inside /etc/resolv.conf of the virtual machine:
#domain local.domain.nam
#nameserver w.x.y.z
#search suffix.for.unqualified.names
What should I write there? How do I configure a name server? I tried copying my host's resolv.conf, but it doesn't work.
If I try to run apt-get update here's the output I get:
You can look at /etc/resolv.conf on the host and add the nameserver lines found there to the guest file. Or you can use third-party recursive name servers. Here are some publicly accessible servers:
Google (8.8.8.8 and 8.8.4.4)
Verisign (64.6.64.6 and 64.6.65.6)
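For example, a minimal /etc/resolv.conf for the guest using Google's public resolvers would be just:
nameserver 8.8.8.8
nameserver 8.8.4.4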
Note that Debian's APT configuration apparently contains unstable/sid repositories (based on the …/unstable/… part in the URLs). If the VM image was created a long time ago, this will make updating and installing additional software very difficult, because unstable/sid has evolved considerably since then, and upgrading a historic unstable snapshot to current versions does not always work.
I know that you can set up a proxy in Ansible to provision behind a corporate network:
https://docs.ansible.com/ansible/latest/user_guide/playbooks_environment.html
like this:
environment:
http_proxy: http://proxy.example.com:8080
Unfortunately, in my case there is no internet access from the server at all. Downloading the roles locally and putting them under the /roles folder seems to solve the role issue, but the roles still download packages from the internet when using:
package:
name: package-name
state: present
I guess there is no way to do a dry/pre-run so that Ansible downloads all the packages, push those into a repo, and then run the Ansible provisioning using the locally downloaded packages?
This isn't really a question about Ansible, as all Ansible is doing is running the relevant package management system on the target host (i.e. yum, dnf, apt, or whatever). So it is a question of what solution the specific package management tool provides for this case.
There are a variety of solutions, and for example in the CentOS/RHEL world you can:
Create a basic mirror (see the sketch after this list)
Install a complete enterprise management system
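For the basic-mirror route, a minimal sketch using reposync and createrepo (the repo id and paths are illustrative, and flag spellings differ slightly between the yum-utils and dnf versions of reposync):
# pull down all packages from the repo with id "base"
reposync --repoid=base -p /srv/mirror
# generate the repodata so clients can use the directory as a repo
createrepo /srv/mirror/base
Clients inside the closed network then point a .repo file's baseurl at wherever you serve /srv/mirror/base from.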
There is another class of tool, generally called an artefact repository. These started out life as tools to store binaries built from code, but have added a bunch of features to act as a proxy and cache packages from a wide variety of sources (OS packages, PIP, NodeJS, Docker, etc.). Two examples that have limited free offerings:
Nexus
Artifactory
They of course still need to collect those packages from a source, so at some point those are going to have to be downloaded and placed within these systems.
As clockworknet pointed out, this is more related to RHEL package handling. Setting up a local mirror somewhere inside the closed network can provide a solution in this situation. More info in "How to create a local mirror of the latest update for Red Hat Enterprise Linux 5, 6, 7 without using Satellite server?": https://access.redhat.com/solutions/23016
My solution:
Install Sonatype Nexus 3
Create one or more yum proxy repositories
https://help.sonatype.com/repomanager3/formats/yum-repositories
Use Ansible to add these proxies via yum_repository
https://docs.ansible.com/ansible/latest/modules/yum_repository_module.html
yum_repository:
  name: proxy-repo
  description: internal proxy repo
  baseurl: https://your-nexus.server/url-to-repo
Note: I did this for APT and it works fine; I would expect the same for yum.
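For the APT case, a rough equivalent using Ansible's apt_repository module might look like this (the Nexus URL and the suite name are placeholders):
- name: Point APT at the internal Nexus proxy  # URL and suite are illustrative
  apt_repository:
    repo: "deb https://your-nexus.server/repository/apt-proxy stretch main"
    state: present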
First, I am on a Mac. Second, I have a VirtualBox VM which was created using Vagrant and which uses a shared folder to easily pass files back and forth, etc.
I would now like to clone this VM from a particular state so that I can upgrade an application on it and move forward with it. The issue is that the only way I know of to use shared folders here is to start the box using vagrant up (this makes sense, as Vagrant mounts the folders as part of its boot process); however, using vagrant up always boots the original VM.
Is there a way to create a clone of a VM using VirtualBox and then to be able to use shared folders so I can easily copy files to and from the host and guest via ssh?
I did some more research and found that I can mount a shared folder in a clone the same way as with the original VirtualBox VM, using:
mount -t vboxsf -o rw,uid=33,gid=33 <shared_folder_name> <guest_folder>
Note that the uid and gid specified here (33 is www-data on Debian) apply only to Debian-based systems; CentOS IDs are different.
For more on the technique and for solutions for CentOS boxes, see here: http://jimmybonney.com/articles/configure_virtualbox_shared_folder_apache_virtual_host/
I've tried the steps in the above article to auto-mount the shared folder when the VM boots, but I've had no success. As a work-around (which I find acceptable for now), I created an alias in my .bashrc file, which seems to work fine.
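For reference, the work-around can be as simple as an alias wrapping the mount command above (folder names are illustrative):
# in ~/.bashrc on the guest; adjust names to your setup
alias mountshared='sudo mount -t vboxsf -o rw,uid=33,gid=33 shared_folder_name /path/to/guest_folder'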
I would now like to clone this VM from a particular state so that I can upgrade an application on it and move forward with it
One thing you can look at is vagrant snapshot.
Snapshots work with the VirtualBox provider to capture the state of your VM at a particular point in time. You can then continue working on your VM and, when needed, easily restore a previous snapshot.
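Basic usage looks like this (the snapshot name is arbitrary):
vagrant snapshot save before-upgrade
# ...upgrade the application, test it...
vagrant snapshot restore before-upgrade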
Current Setup
Using a Vagrant/VirtualBox image for development
Vagrant file and php code are both checked into a git repo
When a new user joins the project they pull down the git repo and type vagrant up
When we deploy to our "dev production" server, we are on a CentOS 7 machine that has VirtualBox and Vagrant installed, and we just run the Vagrant image
Future Setup
We are moving towards an OpenStack "cloud" and are wondering how to best integrate this current setup into the workflow
As I understand it, OpenStack allows you to create individual VMs - which sounds cool, because we could then launch our VMs. The problem is that we are taking advantage of Vagrant/VirtualBox's "mapping" functionality, so that the guest's /var/www/html is mounted to an /html directory in the folder we run vagrant from. I assume this is not possible with OpenStack - and was wondering whether there is a specified best practice for how to handle this situation.
Approach
The only approach I can think of is to:
Install a VM on OpenStack that runs CentOS 7 and then, inside that VM, run Vagrant/VirtualBox (this seems bonkers)
But then we have a VM inside a VM inside a VM, and that just doesn't seem efficient.
Is there a tool - or guide - or guidance on how to work with both a local Vagrant image and the cloud? It seems like there may not be as easy a mapping as I initially thought.
Thanks
It sounds like you want to keep using Vagrant, presumably using https://github.com/ggiamarchi/vagrant-openstack-provider or similar? With that assumption, the way to do this that is probably the smallest iteration from your current setup is just to use an rsync synced folder - see https://www.vagrantup.com/docs/synced-folders/rsync.html. You should be able to do something like this:
config.vm.synced_folder "html/", "/var/www/html", type: 'rsync'
Check the rest of that rsync page though -- depending on your ssh user, you might need to use the --rsync-path option.
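For example, if rsync needs root privileges on the guest to write into /var/www/html, something like the following should work (the rsync__rsync_path option tells Vagrant how to invoke rsync on the remote side):
config.vm.synced_folder "html/", "/var/www/html", type: 'rsync',
  rsync__rsync_path: 'sudo rsync'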
NB - you don't mention whether your vagrant host is running Windows or Linux, etc. If you're on Windows then I tend to use Cygwin, though I expect you can otherwise find some rsync.exe to use.
If you can free yourself from the vagrant pre-requisite then there are many solutions, but the above should be a quick win from where you are now.
Different branches and versions of my codebase have different dependencies, e.g. master branch might be on Ruby 1.9 and use Rails 4, but some release branch might be on Ruby 1.8 and use Rails 3. I imagine this is a common problem, but I haven't really seen much about it.
Is there a clean way to detect/re-provision the Vagrant VM based on the current branch (or maybe Gemfile)?
Currently I have it set up to bundle install and do other stuff in the provisioner, which sorta works, but it still clutters the VM environment with the dependencies of every branch I've ever been on.
Why don't you add the Vagrantfile and the provisioning scripts to the repository? Since they are then part of the branch, they can look however you wish.
If you often switch between branches, this might not suit anymore, since you would need to re-provision the VM every time you change the branch. In that case I would suggest setting up multiple VMs in the Vagrantfile, with provisioning scripts for each VM in parallel. Then you could do something like
vagrant up ruby1.8
vagrant up ruby1.9
... and so on.
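A minimal multi-machine Vagrantfile along these lines might look like the following (the box name and script paths are illustrative):
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/trusty64"  # any base box you like
  config.vm.define "ruby1.8" do |machine|
    machine.vm.provision "shell", path: "provision-ruby1.8.sh"
  end
  config.vm.define "ruby1.9" do |machine|
    machine.vm.provision "shell", path: "provision-ruby1.9.sh"
  end
end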
In the spirit of hek2mgl's answer, but I would use a separate VM repository rather than multiple VMs from the same Vagrantfile.
For example, I sometimes do the following:
have the VM on an external hard drive but the project on my local drive
I have a shell script workhere.sh which sets the corresponding VM to use
#!/usr/bin/env bash
export VAGRANT_CWD="/Volumes/VMDrive/..."
If you commit the script to your git repo, all you need to do after you check out a branch is source workhere.sh, and it'll point you to the correct VM, so you can have multiple VMs running in parallel.
The first time you switch, you'll need to boot the VM and then ssh in; but when you switch again later, the VM will already be up, and since VAGRANT_CWD indicates which VM to point to, vagrant ssh will connect to the correct VM.
If you leave the VMs up and running, you need to be careful about IPs (if you use fixed IPs) and port forwarding, as you could get conflicts.
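A typical branch switch with this setup (the branch name is illustrative):
git checkout release-1.8
source workhere.sh
vagrant up    # only needed the first time; afterwards the VM stays up
vagrant ssh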
I'm looking for some best practices on how to increase my productivity when writing new puppet modules. My workflow looks like this right now:
vagrant up
Make changes/fixes
vagrant provision
Find mistakes/errors, GOTO 2
After I get through all the mistakes/errors I do:
vagrant destroy
vagrant up
Make sure everything is working
commit my changes
This is too slow... how can i make this workflow faster?
I am in denial about writing tests for puppet. What are my other options?
cache your apt/yum repository on your host with the vagrant-cachier plugin (see the snippet after this list)
use --profile / --evaltrace to find where you lose time during full provisioning
use package-based distribution:
e.g. rvm install ruby-2.0.0 vs. a pre-compiled ruby package created with fpm
avoid a "wget the internet and compile" approach
this will probably make your provisioning more reproducible and speedier.
don't code modules
try reusing some from the forge/github/...
note that this can run against my previous advice
if this is an option, upgrade your puppet/ruby version
iterate and prevent full provisioning
vagrant up
vagrant provision
modify manifest/modules
vagrant provision
modify manifest/modules
vagrant provision
vagrant destroy
vagrant up
launch server-spec
minimize typed commands
launch command as you modify your files
you can perhaps set up guard to launch lint/test/spec/provision as you save
you can also send notifications from guest to host machine with vagrant-notify
test without actually provisioning in vagrant
rspec puppet (ideal when refactoring modules)
test your provisioning instead of manual checking
stop vagrant ssh-ing to check whether a service is running or a config has a given value
launch server-spec
take a look at Beaker
delegate running the test to your preferred ci server (jenkins, travis-ci,...)
if you are a bit frustrated by puppet... take a look at ansible
easy to setup (no ruby to install/compile)
you can select portion of stuff you want to run with tags
you can share the playbooks via synced folders and run ansible locally in the vagrant box (no librarian-puppet to launch)
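For the vagrant-cachier suggestion at the top of this list, the plugin's basic Vagrantfile configuration is just:
Vagrant.configure("2") do |config|
  if Vagrant.has_plugin?("vagrant-cachier")
    # share one package cache per base box across all VMs using that box
    config.cache.scope = :box
  end
end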
Update: after a discussion with @garethr, take a look at his latest presentation about guard.
I recommend using language-puppet. It comes with a command-line tool (puppetresources) that can compute catalogs on your computer and let you examine them. It has a few useful features that can't be found in Puppet:
It is really fast (6 times faster on a single catalog, something like 50 times on many catalogs)
It tracks where each resource was defined, and what the "class stack" was at that point, which is really handy when you have duplicate resources
It automatically checks that the files you refer to exist
It is stricter than Puppet (breaks on undefined variables for example)
It lets you print the content of any file to standard output, which is useful for developing complex templates
The only caveat is that it only works with "modern" Puppet practices. For example, require is not implemented. It also only works on Linux.