Current setup:
I have a project that builds with Vagrant/Chef (a bunch of other tools, Docker, Bundler, etc. are in play as well, but this is the focus). The target is only Ubuntu 14.04 64-bit.
1) Vagrant sets up an Ubuntu VM and runs Chef with Berkshelf and all the other Ruby build goodies.
2) Chef runs through all the cookbooks:
presumably downloads any missing dependencies via aptitude and installs packages via dpkg
pulls/builds source from git repos
initializes databases, probably sets permissions, creates files, etc.
There are some tools such as https://github.com/phusion/traveling-ruby that claim to effectively "freeze" a Ruby application so you can ship it with an interpreter and all of its dependencies/gems. This would be fine for a static application if not for the last bit: running through the cookbooks is actually an important step in deploying the application.
The deploy target will have no or limited bandwidth. Is it possible to package a Chef build so that it locally contains all dependencies and no remote download is necessary?
My idea so far is to run a clean build and make a copy of the Chef cache folder and the aptitude cache.
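A rough sketch of that approach, as a shell outline: vendor the cookbooks with Berkshelf and snapshot the apt cache into a local file repository that the offline target can read. The paths, the solo.rb/node.json names, and the [trusted=yes] flag are assumptions, not something from a tested build.

# On the connected build machine (needs dpkg-dev for dpkg-scanpackages):
berks vendor ./vendored-cookbooks                  # resolve and copy every cookbook dependency locally
mkdir -p ./offline-apt
cp /var/cache/apt/archives/*.deb ./offline-apt/
( cd ./offline-apt && dpkg-scanpackages . /dev/null | gzip -9c > Packages.gz )

# On the offline target, after copying both folders across:
echo "deb [trusted=yes] file:/opt/offline-apt ./" | sudo tee /etc/apt/sources.list.d/offline.list
sudo apt-get update
sudo chef-solo -c solo.rb -j node.json             # solo.rb points cookbook_path at the vendored folder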
Related
So I am having some issues with Vagrant. I had initially tried to report this as an issue on the Vagrant GitHub issue board, but they kept closing the issues without responding to them. I guess they decided I wasn't worth their time, or they were just behaving unprofessionally. Anyway, here is the problem: I use Vagrant with VirtualBox, and a new version of VirtualBox was recently released that is, unfortunately, not compatible with the latest Vagrant installation.
However, the people at HashiCorp have already updated the source code so that it is compatible with the new version of VirtualBox, but you have to build the Vagrant executable from the source repo (instructions here). So I followed the instructions, and Vagrant worked just like it used to... when the only command I needed to run was vagrant up. I should also mention that, in order to run the Vagrant dev build, the current working directory needs to be the root of the source code repo, and the dev build can only be run through Ruby with the following command:
bundle exec vagrant
With that being said, I needed to update one of my custom boxes, so I built a VM in the updated version of VirtualBox and ran the command below:
bundle exec vagrant package --base go --vagrantfile ../../vagrant/vagrantfile
After an extended period of time, Vagrant spat back the following error:
The executable 'bsdtar' Vagrant is trying to run was not found in the %PATH% variable. This is an error. Please verify this software is installed and on the path.
I should also note that I use a Windows machine and that this error never occurred when using the installed version of Vagrant. At this point, I had posted the issue on GitHub to get some input from the devs, but they (very unprofessionally) decided to ignore my requests for help and closed the issues without providing any response. I used the GnuWin32 project to make numerous Unix commands available in my Windows environment and added the folder to my PATH environment variable. I then ran the same command again to create my new box, and it worked! So then I uploaded it to Vagrant Cloud and attempted to update the Vagrant box stored on my system by running the following command:
bundle exec vagrant box update
Then, after waiting for a while, Vagrant spat this error out at me:
The box failed to unpackage properly. Please verify that the box
file you're trying to add is not corrupted and that enough disk space
is available and then try again.
The output from attempting to unpackage (if any):
C:\gnuwin32\bin/bsdtar.EXE: invalid option -- s
Usage:
List: bsdtar.EXE -tf <archive-filename>
Extract: bsdtar.EXE -xf <archive-filename>
Create: bsdtar.EXE -cf <archive-filename> [filenames...]
Help: bsdtar.EXE --help
Another error, and still involving this bsdtar tool. It does not appear that anyone else is reporting the issue I am running into, because I think they are just waiting for HashiCorp to release the new official installer. But, just to give you a look into their priorities: the version of VirtualBox that no longer worked with Vagrant was released back on December 10. It has been over a month since then, and there is still no updated release.
So, I am hoping that someone out there might be able to figure out why I keep running into these errors when trying to use Vagrant's dev build and provide a solution. If not, then maybe if someone else is able to reproduce the issue and report it to HashiCorp, they will listen to someone else.
If you are on Ubuntu 20.04, then bsdtar was removed. Try installing the libarchive-tools package:
$ sudo apt-get install libarchive-tools
I figured it out. My original hypothesis was correct: since Vagrant is a tool that was built primarily to be run on Linux machines, when Vagrant runs on Windows the installation includes a MinGW environment with all of the dependencies Vagrant needs to function, which the installed Vagrant executable imports into the console session when run. This is why the dev build kept failing: it was not importing this MinGW environment. So, in order to fix the issue, I first cloned the Vagrant source code repo from GitHub and followed the instructions I linked to above to build the executable from the source repo. I then copied all of the files in the source repo into the following folder:
<hashicorp install folder root>\Vagrant\embedded\gems\2.2.6\gems\vagrant-<version num>
So, for me, the destination directory is C:\HashiCorp\Vagrant\embedded\gems\2.2.6\gems\vagrant-2.2.6
This directory is identical to the source code repo, and copying the source code repo into the folder above replaces the installed version of Vagrant with the dev build. After I did this, the Vagrant commands which had previously failed worked when run normally (that is, without using Ruby or Bundler). I hope this helps someone else out there whom HashiCorp has decided is not worth their time.
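For reference, a minimal sketch of that copy step from Git Bash on Windows; the clone location is an assumption, and the gem path is the one from the post above:

SRC=~/vagrant                                              # local clone of the Vagrant source repo (assumed path)
DEST="/c/HashiCorp/Vagrant/embedded/gems/2.2.6/gems/vagrant-2.2.6"
cp -r "$SRC"/. "$DEST"                                     # overwrite the installed gem with the dev build
vagrant box update                                         # now runs with the embedded MinGW environment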
Our organization has not upgraded to Chef 13 or 14, so we have to pin all our cookbooks to version 12. This means pinning to chef-dk version 1.6.11.
I'm spinning up a CentOS 7 VM in Vagrant with a cookbook and have set the version, but it will only install the latest ChefDK, which results in the machine getting Chef 14. I've added a chef_version ~> 12 dependency in metadata.rb, so the provision fails, as Chef 14 is installed but the cookbook demands 12.
I should mention that the VM is for cookbook development, so I want the right version of Chef on it.
What am I missing to get the right version installed?
Thanks.
recipes/default.rb:
node.default['chef_dk']['version'] = '1.6.11'
node.default['chef_dk']['global_shell_init'] = true
include_recipe 'chef-dk'
metadata.rb:
depends 'chef-dk'
chef_version '~> 12.0'
Berksfile:
cookbook 'chef-dk'
The part that is failing is the "outer" Chef, the thing running the recipe, not the ChefDK install (it never gets that far).
We don't generally recommend using Chef to install ChefDK, because installing both the chef-client and ChefDK installers on the same machine can lead to confusion, as there are overlapping command-line tools. I would provision the dev VM using a simpler system, probably a bash script or similar. We also provide chef/chefdk Docker images on Docker Hub for this kind of thing. (Also, we don't recommend doing cookbook development inside a VM at all, but I would guess that ship has sailed for you.)
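If you go the simple-script route, a minimal sketch of what that could look like as a Vagrant shell provisioner, assuming the public Omnitruck installer is reachable from the VM:

#!/bin/bash
# Install a pinned ChefDK instead of converging a cookbook for it.
curl -fsSL https://omnitruck.chef.io/install.sh | sudo bash -s -- -P chefdk -v 1.6.11
chef --version    # sanity check: the embedded Chef should be the expected 12.x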
I am trying to cache Ruby gems onto a Jenkins slave. I have installed Gemstash onto my Linux VirtualBox VM which runs the slave; however, I am not sure if I am installing it in the right location.
Should I be installing it by logging into the Jenkins user in the terminal and installing it there? (When I created the slave node, I didn't need to install Jenkins onto the box.) The source I use for the Gemfile is localhost:9292.
EDIT:
And how can I check what packages gemstash has cached?
Checking if gemstash has cached packages can be done by following https://github.com/bundler/gemstash#bundling
Any help would be appreciated.
As the README says, have a look in ~/.gemstash:
You might wonder where the gems are stored. After running the commands above, you will find a new directory at ~/.gemstash. This directory holds all the cached and private gems. It also has a server log, the database, and configuration for Gemstash.
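A quick sketch of how that fits together, using the mirror setting from the Gemstash README; the exact layout under ~/.gemstash may differ between versions:

# As the Jenkins user, route Bundler through the local Gemstash server.
bundle config mirror.https://rubygems.org http://localhost:9292
bundle install

# Inspect what Gemstash has cached so far.
ls -R ~/.gemstash        # cached/private gems, server log, database, and config live here
du -sh ~/.gemstash       # rough size of the cache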
Can someone guide me on installing Mule ESB (mule-standalone-3.3.1) on Ubuntu? I am unable to find any documentation for installing it. I want to automate it through Chef.
It can be as simple as downloading and unpacking the archive file from: http://dist.codehaus.org/mule/distributions/mule-standalone-3.3.1.zip
Note: You need JDK 6 or 7 installed first.
Here's a chef cookbook that does this: https://github.com/ryandcarter/mule-cookbook
And here's a Vagrant script for running the mule cookbook on ubuntu etc: https://github.com/ryandcarter/vagrant-mule
It is very simple.
Download and unpack the archive file from http://dist.codehaus.org/mule/distributions/mule-standalone-3.3.1.zip, or whatever version you want to install.
Put the unpacked folder anywhere you want, such as /opt/ or /usr/local/.
Put your Mule application inside the apps folder.
Then go to the bin directory and run ./mule start. Now the Mule server is running. You can also check the Mule log in the mule.log file inside the logs folder.
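The same steps as a short shell sketch; the /opt location and the application archive name are just examples:

# Download and unpack Mule standalone (JDK 6/7 must already be installed).
wget http://dist.codehaus.org/mule/distributions/mule-standalone-3.3.1.zip
sudo unzip mule-standalone-3.3.1.zip -d /opt/
cd /opt/mule-standalone-3.3.1

# Deploy an application by dropping it into apps/, then start the server.
sudo cp ~/my-mule-app.zip apps/        # your application archive (name is an assumption)
sudo ./bin/mule start
tail -f logs/mule.log                  # watch the server log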
This is an old question, but in case there are others who are looking.
You want to install Mule as an Ubuntu service, so that it restarts when the server restarts. There are a couple of basic steps to this.
I have detailed instructions and installation files at my GitHub repository:
https://github.com/jamesontriplett/mule_linux_service_install
Steps in general:
Install a startup script in /etc/init.d
Install a startup parameter file in /etc/mule
Customize parameters in the wrapper.conf file in /conf/wrapper.conf
Install the license file onto the server if using enterprise
Add the startup script to the run levels.
To test, you want to reboot the Linux server to make sure that Mule comes back after a reboot. If it doesn't, you have a reliability issue.
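A rough sketch of steps 1 and 5, assuming the init script is simply named mule; the repository linked above has the full files:

# Install the init script and register it for the default run levels.
sudo cp mule /etc/init.d/mule
sudo chmod +x /etc/init.d/mule
sudo update-rc.d mule defaults        # adds the script to the run levels

# Quick check before the reboot test described above.
sudo service mule start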
I want to use Jenkins for creating RPM packages to deploy code and scripts onto Linux Red Hat machine(s).
The applications are a mix of technologies (no compiling needed); I just need to package up the applications, deploy them to the correct location, and restart Apache.
Would anybody have some instructions on how to do these steps for a total newbie?
Some questions:
Do I need to install Jenkins on a local Linux machine if I'm going to be creating RPMs that will be deployed onto Linux Red Hat machines? (I was hoping to install Jenkins on Windows.)
Does anybody have an example of creating a package out of a local folder (no source control for the moment)?
I want to just specify the directory to take the code from and specify where to deploy the code to on the machine the RPM is installed on.
On the destination machine I want to run something like
yum install mypackage-version12.rpm
and it will install the code/scripts to the specified directory and restart Apache.
I need an example of this also.
Thanks
You can install Jenkins on a different machine, but you generally must have a Jenkins "node", "slave", or "agent" installed on a machine that can generate RPM packages.
Running each step of the RPM package setup from Jenkins means putting all the build steps inside Jenkins. It works much better if you extend your build system to build the RPM and have Jenkins do what it does best: manage the build (scheduling, etc.), not micro-manage the build (doing the individual steps).
Depending on what you currently have as your build system, this might include Ant directives to set up the rpmbuild tree, copy in the .spec file, and an executable call to rpmbuild.
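As a rough illustration of that rpmbuild step, with every name (myapp, /opt/myapp, the folder being packaged) invented for the example; Apache is restarted in %post, as the question asks:

# Lay out a standard rpmbuild tree and stage the local folder as the "source".
mkdir -p ~/rpmbuild/{BUILD,RPMS,SOURCES,SPECS,SRPMS}
cp -r /path/to/local/folder ~/rpmbuild/SOURCES/myapp

# A minimal spec: no compile step, just copy files and restart Apache on install.
cat > ~/rpmbuild/SPECS/myapp.spec <<'EOF'
Name:           myapp
Version:        1.0
Release:        1
Summary:        Scripts and code for myapp
License:        Proprietary
BuildArch:      noarch

%description
Packages a local folder of scripts; nothing is compiled.

%install
mkdir -p %{buildroot}/opt/myapp
cp -r %{_sourcedir}/myapp/* %{buildroot}/opt/myapp/

%files
/opt/myapp

%post
service httpd restart >/dev/null 2>&1 || :
EOF

rpmbuild -bb ~/rpmbuild/SPECS/myapp.spec
# Result: ~/rpmbuild/RPMS/noarch/myapp-1.0-1.noarch.rpm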
Jenkins can easily call a post-build task to do this, or you might want to configure a mini "fake" project that does the update, depending on tastes.
As an aside, for a yum command to work without resorting to yum localinstall, you will need to have a web server set up, the new RPM copied to the right folder on the web server, and the repository index files rebuilt (createrepo is the tool that does this).
On the client machine (where the package will be installed), you will need to have a yum configuration that directs the client machine to include the web server as one of the known yum repositories.
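A sketch of that publishing side; the web server path, hostname, and repo name are invented for the example:

# On the web server: publish the RPM and rebuild the repository metadata.
cp ~/rpmbuild/RPMS/noarch/myapp-1.0-1.noarch.rpm /var/www/html/repo/
createrepo /var/www/html/repo/

# On each client: register the repository, then install normally.
sudo tee /etc/yum.repos.d/internal.repo >/dev/null <<'EOF'
[internal]
name=Internal build repo
baseurl=http://buildserver.example.com/repo/
enabled=1
gpgcheck=0
EOF
sudo yum install myapp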
Why not use a Docker image to build the RPM inside it through a dedicated stage?
Your code needs to provide the rpm/SPECS files, and inside the Docker-based Jenkins job you can have a Jenkinsfile step like:
mkdir -p ./rpm/BUILD && cd ./rpm/ && for f in ./SPECS/*; do rpmbuild --define \"_topdir \$(pwd)/\" --define \"_builddir \$(pwd)/BUILD\" -bb \$f; done
And you are done.
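The same idea outside a Jenkinsfile, as a plain shell sketch; the centos:7 image and the rpm/ layout are assumptions:

# Build the RPMs inside a throwaway container so the Jenkins host needs no rpm tooling.
docker run --rm -v "$PWD/rpm":/rpm -w /rpm centos:7 bash -c '
  yum install -y rpm-build &&
  mkdir -p BUILD &&
  for f in SPECS/*; do
    rpmbuild --define "_topdir $(pwd)/" --define "_builddir $(pwd)/BUILD" -bb "$f"
  done
'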