Puppet to Chef converter

Is there any converter which converts Puppet scripts to Chef?
I found a Ruby script which converts Chef scripts to Puppet (https://github.com/relistan/chef2puppet), but I need puppet2chef.

UPDATE: So, Blueprint is abandonware (last update in 2013). I think the answer to this question is, sadly, now: No.
So, I do not believe there is a 'simple script' way available yet to do this conversion. What I've done in testing is to use Blueprint to do the following:
Install a fresh node with Puppet for the node type you wish to "convert".
Allow Blueprint to scan the resulting server.
Use Blueprint to export either Chef or Puppet code (see the sketch after this list).
Repeat for each node type you have defined in Puppet (or Chef, if you go the other direction).
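As a rough sketch, steps 2 and 3 might look like the following on the freshly provisioned node (the create/show subcommands and the -P/-C output flags here are from memory of the Blueprint docs, so verify them against blueprint --help before relying on them):
# scan the running server and record packages, files and source installs
sudo blueprint create my-node-type
# render the recorded blueprint as a Puppet module or a Chef cookbook
blueprint show -P my-node-type
blueprint show -C my-node-type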
From the Blueprint README.md:
Blueprint reverse-engineers servers.
Easy configuration management.
Detect relevant packages, files, and source installs.
Generate reusable server configs.
Convert blueprints to Puppet or Chef or CFEngine 3.
No DSLs, no extra servers, no workflow changes.
I hope I'm proven wrong, and there is a soup-to-nuts script to just "convert" from Puppet to Chef. I'd use it in a heartbeat! :) This method will at least get you started.

Related

Chef Kitchen CI Verify without converge. Existing Server test

I have a situation where I want to test an AWS EC2 server using the Kitchen test framework. We are using CloudFormation for our infrastructure creation, not Chef. I want to use the Kitchen verify functionality by writing test cases, but I can't use Chef recipes for infrastructure creation.
Is there any way I can just use the kitchen verify command against existing EC2 infrastructure created by CloudFormation? How do I specify the address of an existing server which was not created using the kitchen converge command?
Appreciate your help!
KitchenCI is only a tool (a powerful one, no doubt! :-)) which connects other tools/drivers (provisioners, verifiers, etc.).
Since you do not use it for provisioning your test infrastructure, it makes little to no sense to use it for verification. Instead, I would suggest researching whether your preferred verifier (you didn't mention which one you are using) can be used standalone. For example, you can run InSpec without Kitchen (look for the backend/host flags).
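For example, a standalone InSpec run against an already-running host might look like this (the profile path, user, host name, and key file are placeholders for your own values):
# run an InSpec profile over SSH against an existing machine, no Kitchen involved
inspec exec test/integration/default -t ssh://ec2-user@my-existing-host -i ~/.ssh/my-key.pem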
There is a driver plugin for CloudFormation which includes its own pass-through provisioner, but I've never used it, and using standalone InSpec or Serverspec is probably easier :)

How can Puppet fit into a Continuous Delivery tool chain?

I'm investigating Puppet as our future deployment and provisioning tool in our shop, but now I'm stuck at how to make a clever Continuous Integration/Delivery tool chain with deployment through Puppet.
In any of our environments (dev, test, qa, demo, prod) we have a range of components. We need to be able to deploy each component separately and possibly even concurrently.
I'd like a way to initiate (through script) a deploy of a single component package (=Puppet module) and gather the output and success status of that.
Simply waiting for a scheduled agent pull, or doing a 'puppet agent --test' on each node on the environment isn't good enough, because it may pick up other pending changes (I don't know if another component is also in the process of being deployed).
In my tool chain I would like the deployment output and status from component A and component B to be recorded separately and not mixed up.
So my question is: Can I use puppet to deploy one single named package (module) at a time?
And if not, where did I take a wrong turn when I drove down this path?
I realise a master-less Puppet set-up with modules and manifests replicated to each node perhaps could do it, but IMHO a master-less Puppet set-up kind of defeats the purpose of Puppet.
PS: I think what I'm trying to achieve is called 'Directed Orchestration' in Damon Edwards' very enlightening video at Integrating DevOps tools into a Service Delivery Platform (at timestamp around 22:30).
So my question is: Can I use puppet to deploy one single named package (module) at a time?
Yes, you can, via puppet apply. First you need to create a module directory and a module that will contain your manifests, e.g.:
/scratch/user/puppet/local/ # This is your modulepath for local deployment
# The following contains the manifests for a module named "localmod"
/scratch/user/puppet/local/localmod/manifests/init.pp
# example content of init.pp
class localmod {
notify{"I am in in local module....":}
}
On that local machine you can test this module via puppet apply:
puppet apply -v --modulepath=/scratch/user/puppet/local -e "include localmod"
echo $? # Get the exit status of the above command
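If your tool chain needs to tell "changes were applied" apart from "something failed", puppet apply also takes --detailed-exitcodes (0 = no changes, 2 = changes applied, 4 = failures, 6 = changes and failures):
puppet apply -v --detailed-exitcodes --modulepath=/scratch/user/puppet/local -e "include localmod"
echo $? # 0, 2, 4 or 6 as described above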
I watched the video up to the point you referenced. There are two types of automation you can do:
Application build/deploy automation, which can be achieved via maven/ant (build) and ant/capistrano/chrome/bash/msdeploy (deploy), or, as termed on that slide, an "Installer".
System/Infrastructure automation can be achieved via Chef/Puppet/CFEngine.
This question seems to be ... "How do I do application builds using Puppet (implied as a system automation tool)?"
So quite simply, oval tool in round hole. (I didn't say square)
At my company, we use Jenkins and the Build Pipeline Integration plugin to build massive multi component projects. As an example, a Java app will use ant in a build job, the next chained job will be a "deploy to dev" job which uses Capistrano to deploy the application, then the next job in the chain is "Configure Dev" which calls Chef to update the system configurations in the DEV environment. Chef is used to configure the application. Each of these jobs can be set to run automatically and sequentially.
a master-less Puppet set-up kind of defeats the purpose of Puppet.
Only if you discount:
the rich DSL Puppet has to offer
the many peer-reviewed community modules
Otherwise, something like this gives you remote directed orchestration.
#update manifests etc (version control is the source of truth)
ssh user@host git pull
#run puppet
ssh user@host sudo puppet apply

What is the correct way to use Chef from Ruby (Rails)?

I'm very new to Chef; maybe I'm searching wrong, but Google shows a lot of quick starts and deployment options, mostly about how to deploy an app from a dev's console. What I need is to run recipes from the Rails app.
I have a stack which includes Rails+Resque as the master and Chef as a slave. Chef is added as the chef gem, and chef/shef/ext is used inside the app to run queries.
It should do several things, like create SSH users (which works) and deploy new app stacks (which doesn't).
As the chef gem doesn't have a lot of docs, and ext doesn't feel user (or dev) oriented either, I think there should be some other way to work with the Chef server (knife?), or some kind of documentation on the gem that I'm definitely missing, to work efficiently with this.
We got stuck on something similar and ended up using the ridley gem, as per this SO question.
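A minimal sketch of what that looks like, along the lines of Ridley's documented usage (the server URL, client name, and key path are placeholders for your own Chef server credentials):
require 'ridley'
# connect to the Chef server with an API client key
ridley = Ridley.new(
  server_url:  'https://chef.example.com/organizations/myorg',
  client_name: 'rails-app',
  client_key:  '/etc/chef/rails-app.pem'
)
# query server-side objects from plain Ruby
puts ridley.node.all.map(&:name)
ridley.role.find('webserver')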

How do I share code across Chef cookbooks in a chef-repo?

I would like to share a small handful of methods across recipes in a chef repo. I know that on a cookbook level I can put code in modules in the libraries directory (see related question). What I'm looking for is something like that but available across all of the cookbooks in my Chef repo.
I can think of a couple solutions:
Create a gem, install the gem as part of the chef run. This seems like overkill.
Put the file in some folder and add that folder to the $LOAD_PATH in the recipe file. I have a feeling that won't work with actual deployment because the chef server doesn't know anything about the repo.
Put the file in some folder and symlink that into the libraries directory of each cookbook.
The last option seems like the most viable. Is there a better/more idiomatic way to do what I want?
You can use a library-defined function from another cookbook, but you must teach Chef that your cookbook depends on the providing cookbook.
So, for example, if in cookbook A, you have a libraries/default.rb that provides some function f, you can access it from cookbook B so long as B's metadata.rb file includes the line:
depends "A"
See the Chef documentation on metadata and libraries for more details.
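A minimal sketch (the cookbook, module, and method names are illustrative):
# cookbooks/A/libraries/default.rb
module A
  module Helpers
    # plain Ruby; loaded at compile time for any cookbook that depends on A
    def self.greeting(name)
      "hello, #{name}"
    end
  end
end
# cookbooks/B/recipes/default.rb  (B's metadata.rb contains: depends "A")
log 'greeting' do
  message A::Helpers.greeting('world')
end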
There are three distinct options for sharing code, in the form of either a Chef resource (1. LWRP, 2. HWRP) or plain methods (3. "libraries"). I'd suggest you consider LWRPs first. I find this answer very good at explaining the differences between the mentioned techniques.
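For comparison, a bare-bones LWRP shared from a (hypothetical) cookbook called mylib would look something like this; any cookbook that declares depends "mylib" can then call the mylib_greeting resource:
# cookbooks/mylib/resources/greeting.rb
actions :create
default_action :create
attribute :name, kind_of: String, name_attribute: true
attribute :path, kind_of: String, default: '/tmp/greeting'
# cookbooks/mylib/providers/greeting.rb
action :create do
  file new_resource.path do
    content "hello, #{new_resource.name}"
  end
end
# some other cookbook's recipe (its metadata.rb contains: depends "mylib")
mylib_greeting 'world'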

Create stand-alone system services in Ruby

I want to build an application which serves as a stand-alone system service, always runs in the background, and serves a front end with a web interface.
Just as on Linux we do /etc/init.d/apache2 start, I want to start my application with /etc/init.d/myapp start.
My main target is to deliver on Linux, especially Ubuntu; the whole app would be in plain Ruby and the front end would be in Sinatra.
I want it to install with a simple gem install my_app, after which command-line features become available to start the service. The application will be doing heavy processing and database insertion. I also want its configuration to be set in plain Linux fashion, like /etc/apache2/apache2.conf.
Can anyone guide me on this? Also, if possible, I want to secure the code; is there any possibility of doing that?
I am using the Daemon-Kit gem for the same requirements. Works very well in production. The only thing it does not include is the configuration with a .conf file, but it's easy to do it yourself with Ruby code. You can deploy with Capistrano, no need to install.
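For the /etc-style configuration, a tiny hand-rolled parser is usually enough; here is a minimal sketch (the path and the key=value format are just an assumption, not something Daemon-Kit gives you):
# read simple "key = value" pairs from a plain-text conf file, skipping blanks and comments
def load_conf(path = '/etc/myapp/myapp.conf')
  return {} unless File.exist?(path)
  File.readlines(path).each_with_object({}) do |line, conf|
    line = line.strip
    next if line.empty? || line.start_with?('#')
    key, value = line.split('=', 2)
    conf[key.strip] = value.to_s.strip
  end
end
settings = load_conf
port = settings.fetch('port', '8080')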
