I am trying to create tasks that sync from my production/staging environments to a local vagrant box.
I am hoping for a command like this: cap vagrant sync_production_database, which would perform a database dump on the remote server, download it, and then import it on the vagrant box. Unfortunately, I can't find a way to execute a Capistrano task on another environment.
I have my environments set up like so:
config
├── deploy
│   ├── production.rb
│   ├── staging.rb
│   └── vagrant.rb
└── deploy.rb
And here is an example of what I am trying to accomplish:
desc 'sync database'
task :sync_production_database do
  # executed on remote server
  # this is obviously not working
  on(:production) do |host|
    # dump database and download it
  end

  # executed on vagrant box
  on roles(:web) do |host|
  end
end
First, I think it is better to use the stage parameter of the cap command to designate the remote stage rather than your local one. This means your command always assumes :vagrant is the local stage.
Then, if the vagrant stage servers have a role that the remote servers don't have, you can execute different tasks on each stage like this:
# Assuming the following stage definitions in deploy/production.rb and deploy/vagrant.rb respectively
server 'production.example.com', roles: %w{web app}
server 'vagrant.local', roles: %w{web localhost}

# the following will execute tasks on each host
desc 'sync database'
task :sync_database do
  # executed on remote server(s)
  on roles(:app) do |host|
    # dump database and download it
  end

  # Load the servers in deploy/vagrant.rb
  invoke(:vagrant)

  # executed on vagrant box server(s)
  on roles(:localhost) do |host|
    # Create database and load dump from remote
  end
end
This works because roles(...) returns all loaded servers with the given role, and since each stage has a unique role, you can retrieve only the servers you want by specifying their respective role.
Normally, without invoke(:vagrant), roles(:localhost) in the example above wouldn't return anything, since by default Capistrano only loads the servers defined in the given stage. To get around this, you can force-load the servers in your vagrant stage using invoke(:vagrant). Then roles(:app) returns the servers for the given stage and roles(:localhost) returns your vagrant servers.
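To make this concrete, here is a minimal sketch of what the dump and load steps could look like, assuming MySQL and hypothetical database names and paths (adjust to your setup; download! and upload! are SSHKit's file transfer helpers):
desc 'sync database'
task :sync_database do
  dump = '/tmp/myapp_production.sql' # hypothetical dump location

  # executed on remote server(s): dump the database and fetch it locally
  on roles(:app) do |host|
    execute :mysqldump, '-u deploy', '--result-file', dump, 'myapp_production'
    download! dump, 'production.sql'
  end

  # Load the servers in deploy/vagrant.rb
  invoke(:vagrant)

  # executed on vagrant box server(s): push the dump and import it
  on roles(:localhost) do |host|
    upload! 'production.sql', dump
    execute :mysql, '-u root', "myapp_development < #{dump}"
  end
end
You would then run cap production sync_database; the vagrant servers are merged in by invoke(:vagrant).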
I'm new to Ansible.
Currently I am working with a remote host that uses Capistrano as its deployment tool.
When I run a deploy script like the following:
- name: build source
  shell: |
    echo "bundle exec cap branch=staging stg deploy"
  tags:
    - build_source
The Capistrano config is here:
set :branch, lambda {
  branch = Capistrano::CLI.ui.ask("[cap] Branch or Tag (default `master`): ")
  branch.empty? ? "master" : branch
}
As a result, the Ansible run gets stuck at the ask step.
Is there any way to pass an argument from Ansible on my local machine to Capistrano on the remote host?
Many thanks in advance.
Thank you.
The expect module that @Zeitounator suggested was what I was looking for.
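For anyone landing here later, a minimal sketch of what that could look like, assuming the prompt text from the config above (the expect module needs pexpect installed on the target host):
- name: build source
  expect:
    command: bundle exec cap stg deploy
    responses:
      '\[cap\] Branch or Tag.*': staging
  tags:
    - build_source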
I have a project to deploy with Capistrano 3.3.3. There are two different server machines: one is a web server (role :app), the other is a DB server (role :db). On the DB server I have an Apache Solr service, and the devs need to update its config files. They store these config files in a repository with the rest of the project code. During a deploy I need to upload these files to the solr directory on the DB server. I have a legacy task that does this.
desc 'Solr config update'
task :update_solr_config do
  on roles(:app) do
    execute "scp -i /home/user/dbserver.pem #{current_path}/stack/data-config-menu-produccion.xml user@dbserver:/usr/share/tomcat7/solr/menu/conf/data-config.xml"
    execute "scp -i /home/user/dbserver.pem #{current_path}/stack/data-config-promociones-produccion.xml user@dbserver:/usr/share/tomcat7/solr/promociones/conf/data-config.xml"
    execute "scp -i /home/user/dbserver.pem #{current_path}/stack/data-config-vista-produccion.xml user@dbserver:/usr/share/tomcat7/solr/vista/conf/data-config.xml"
  end
end
But what if there are two DB servers at some point? How would I have to modify this task then?
I've read about Capistrano's upload, put, download, get, and transfer methods, but I can't figure out which of them could do a server-to-server file transfer. I suspect this task should be applied to the :db role so that it iterates over each server in that role.
desc 'Solr config update'
task :update_solr_config do
  on roles(:db) do
    # ...Some magic goes here...
  end
end
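For example, something along these lines is what I imagine (an untested sketch; it keeps the scp on the app server but loops over every host in the :db role):
desc 'Solr config update'
task :update_solr_config do
  # collect the hostname of every server that has the :db role
  db_hosts = roles(:db).map(&:hostname)

  on roles(:app) do
    db_hosts.each do |db_host|
      %w{menu promociones vista}.each do |core|
        execute "scp -i /home/user/dbserver.pem #{current_path}/stack/data-config-#{core}-produccion.xml user@#{db_host}:/usr/share/tomcat7/solr/#{core}/conf/data-config.xml"
      end
    end
  end
end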
Thanks for any help.
I'm trying to set up a workflow to develop Chef cookbooks locally. We're currently using Chef Server with the provisioned nodes using chef-client.
As part of the new workflow, we want to start using Vagrant to test cookbooks locally, to avoid incurring the costs of testing on a remote machine in the cloud.
I'm able to launch and provision a local Vagrant machine, but the one thing I'm not really sure how to do is have Chef load the local version of the cookbook while still talking to the Chef server for everything else (environments, roles, data bags, etc.), so that I don't have to upload the cookbook via knife every time I make a change I want to test. Is this possible?
In other words, can I make chef-client talk to the local chef-zero server only for the cookbooks but to the remote Chef server for everything else? Or maybe a different approach that would yield the same effect? I'm open to suggestions.
UPDATE
I think an example will help express what I'm looking for. I'm realizing that this may not really be what I need, but I'm curious about how to achieve it anyway. In this scenario, a recipe reads from a data bag stored on the remote Chef server.
metadata.rb
name 'proxy-cookbook'
version '0.0.0'
.kitchen.yml
---
driver:
  name: vagrant
provisioner:
  name: chef_zero
platforms:
  - name: ubuntu-12.04
suites:
  - name: default
    run_list:
      - recipe[proxy-cookbook::default]
    attributes:
recipes/default.rb
...
key = data_bag_item("key", "main")
...
Now, I know I can create something along the lines of:
data_bags/main.json
{
  "id": "main",
  "key": "s3cr3tk3y"
}
And have my kitchen tests read from that data bag; but that is exactly what I'm trying to avoid. Is it possible to either:
1. instruct test-kitchen to get the actual data bag from the Chef server,
2. have chef-zero retrieve a temporary copy of the data bags for local tests, or
3. quickly "dump" the contents of a remote Chef server locally?
I hope that makes sense. I can add some context if necessary.
Test Kitchen is the best way to drive Vagrant. It provides the integration you're looking for with Chef Zero, and it enables you to completely emulate your production Chef setup locally and test your cookbook against multiple platforms.
Test Kitchen has replaced the older workflows I used for Chef development. It is very much worth learning.
Example
Generate a demo cookbook that installs Java using the community cookbook. Tools like Berkshelf (to manage cookbook dependencies) and Chef Zero are set up automatically.
chef generate cookbook demo
Creates the following files:
└── demo
    ├── .kitchen.yml
    ├── Berksfile
    ├── metadata.rb
    ├── recipes
    │   └── default.rb
    └── test
        └── integration
            ├── default
            │   └── serverspec
            │       └── default_spec.rb
.kitchen.yml
Update the platform versions. Kitchen is told to use vagrant and chef zero.
---
driver:
  name: vagrant
provisioner:
  name: chef_zero
platforms:
  - name: ubuntu-14.04
  - name: centos-6.6
suites:
  - name: default
    run_list:
      - recipe[demo::default]
    attributes:
Berksfile
This file controls how cookbook dependencies are managed. The special "metadata" setting tells Berkshelf to refer to the cookbook metadata file.
source 'https://supermarket.chef.io'
metadata
metadata.rb
Add the "apt" and "java" cookbooks as a dependencies:
name 'demo'
..
..
depends "apt"
depends "java"
recipes/default.rb
include_recipe "apt"
include_recipe "java"
test/integration/default/serverspec/default_spec.rb
Test for the installation of the JDK package
require 'spec_helper'

describe package("openjdk-6-jdk") do
  it { should be_installed }
end
Running the example
$ kitchen verify default-ubuntu-1404
-----> Starting Kitchen (v1.4.0)
..
..
Package "openjdk-6-jdk"
should be installed
Finished in 0.1007 seconds (files took 0.268 seconds to load)
1 example, 0 failures
Finished verifying <default-ubuntu-1404> (0m13.73s).
-----> Kitchen is finished. (0m14.20s)
Update
The following example demonstrates using test kitchen with roles (the same approach works for data bags and other items you want loaded into chef-zero):
Can the java cookbook be used to install a local copy of oracle java?
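For reference, the relevant part is the provisioner section, which can point Chef Zero at local copies of those objects (a sketch; the paths are assumptions):
provisioner:
  name: chef_zero
  roles_path: test/integration/roles
  data_bags_path: test/integration/data_bags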
I think I found what I was looking for.
You can use knife to download the Chef server objects that you need. You can bootstrap this in .kitchen.yml so you don't have to do it manually every time.
.kitchen.yml
...
driver:
  name: vagrant
  pre_create_command: 'mkdir -p chef-server; knife download /data_bags /roles /environments --chef-repo-path chef-server/'
...
provisioner:
  name: chef_zero
  data_bags_path: chef-server/data_bags
  roles_path: chef-server/roles
  environments_path: chef-server/environments
  client_rb:
    environment: development
...
And then I just added the chef-server directory to .gitignore
.gitignore
chef-server/
There might be a less redundant way of doing this, but this works for me right now, and since I just wanted to document this, I'm leaving it like that.
I need to deploy to 2 different servers, and these 2 servers have different authentication methods (one is my university's server and the other is an Amazon Web Services (AWS) server).
I already have Capistrano running for my university's server, but I don't know how to add the deployment to AWS, since for that one I need to add ssh options, for example to use the .pem file, like this:
ssh_options[:keys] = [File.join(ENV["HOME"], ".ssh", "test.pem")]
ssh_options[:forward_agent] = true
I have browsed Stack Overflow, and no post mentions how to deal with different authentication methods (this and this).
I found a post that talks about 2 different keys, but that one refers to a server and a git repository, both using different .pem files. That is not my case.
I got to this tutorial, but couldn't find what I need.
I don't know if this is relevant to what I am asking: I am working on a Rails app with Ruby 1.9.2p290 and Rails 3.0.10, and I am using an SVN repository.
Any help is welcome. Thanks a lot.
You need to use Capistrano multistage. There is a gem that does this, or you could just include an environments or stage file directly in the Capfile.
You will not be able to deploy to these environments at the same time, but you can do so sequentially.
desc "deploy to dev environment"
task :dev do
set :stage_name, "dev"
set :user, "dev"
set :deploy_to, "/usr/applications/dev"
role :app, "10.1.1.1"
end
desc "deploy to aws environment"
task :aws do
set :stage_name, "aws"
set :user, "aws"
set :deploy_to, "/usr/applications/aws"
ssh_options[:keys] = [File.join(ENV["HOME"], ".ssh", "test.pem")]
ssh_options[:forward_agent] = true
role :app, "10.2.2.2"
end
You would run:
cap dev deploy; cap aws deploy
You can extend this to handle VPNs, different users, gateways, etc.
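For instance, a stage task could also route the deploy through a bastion host (a sketch using Capistrano 2's gateway setting; the bastion hostname is hypothetical):
desc "deploy to aws environment through a bastion host"
task :aws_via_gateway do
  set :stage_name, "aws"
  set :user, "aws"
  set :deploy_to, "/usr/applications/aws"
  set :gateway, "bastion.example.com" # hypothetical bastion host
  ssh_options[:keys] = [File.join(ENV["HOME"], ".ssh", "test.pem")]
  ssh_options[:forward_agent] = true
  role :app, "10.2.2.2"
end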
I'm trying to set up multiple roles, one for live, and another for dev. They look like this:
role :live, "example.com"
role :dev, "dev.example.com"
When I run cap deploy, however, it executes for both servers. I've tried the following and it always executes on both.
cap deploy live
cap ROLE=live deploy
What am I missing? I know I can write a custom task that only responds to one role, but I don't want to have to write a whole bunch of tasks just to tell it to respond to one role or another. Thanks!
Capistrano Multistage is definitely the solution to the example you posted for deploying to environments. In regard to your question of deploying to roles or servers, Capistrano has command-line solutions for that too.
To deploy to a single role (notice ROLES is plural):
cap ROLES=web deploy
To deploy to multiple roles:
cap ROLES=app,web deploy
To deploy to a particular server (notice HOSTS is plural):
cap HOSTS=web1.myserver.com deploy
To deploy to several servers:
cap HOSTS=web1.myserver.com,web2.myserver.com deploy
To deploy to a server(s) with a role(s):
cap HOSTS=web1.myserver.com ROLES=db deploy
You can do something like this:
task :dev do
  role :env, "dev.example.com"
end

task :prod do
  role :env, "example.com"
end
Then use:
cap dev deploy
cap prod deploy
Just one more hint: if you use multistage, remember to put the ROLES variable before the cap command.
ROLES=web cap production deploy
or after the environment:
cap production ROLES=web deploy
If you put it as the first parameter, multistage will treat it as a stage name and replace it with the default one:
cap ROLES=web production deploy
* [...] executing `dev'
* [...] executing `production'
Try capistrano multistage:
http://weblog.jamisbuck.org/2007/7/23/capistrano-multistage
Roles are intended to deploy different segments to different servers, as opposed to deploying the whole platform to just one set of servers.
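For example, in Capistrano 2 a task can be scoped to a role so that it only runs on the matching servers (a generic sketch, not tied to the setup above):
role :web, "web.example.com"
role :db,  "db.example.com"

# runs only on servers that have the :db role
task :migrate, :roles => :db do
  run "cd #{current_path} && rake db:migrate"
end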