Capistrano deploying to different servers with different authentication methods - amazon-ec2

I need to deploy to 2 different servers, and these 2 servers have different authentication methods (one is my university's server and the other is an Amazon Web Services (AWS) server).
I already have Capistrano running for my university's server, but I don't know how to add the deployment to AWS, since for that one I need to add SSH options, for example to use the .pem file, like this:
ssh_options[:keys] = [File.join(ENV["HOME"], ".ssh", "test.pem")]
ssh_options[:forward_agent] = true
I have browsed Stack Overflow and no post mentions how to deal with different authentication methods (see this and this).
I found a post that talks about 2 different keys, but that one refers to a server and a git repository, each using a different .pem file. This is not my case.
I also got to this tutorial, but couldn't find what I need.
I don't know if this is relevant to what I am asking: I am working on a Rails app with Ruby 1.9.2p290 and Rails 3.0.10, and I am using an SVN repository.
Any help is welcome. Thanks a lot.

You need to use Capistrano multistage. There is a gem that does this, or you can simply include an environments or stage file directly in the Capfile.
You will not be able to deploy to both environments at the same time, but you can deploy to them sequentially.
desc "deploy to dev environment"
task :dev do
set :stage_name, "dev"
set :user, "dev"
set :deploy_to, "/usr/applications/dev"
role :app, "10.1.1.1"
end
desc "deploy to aws environment"
task :aws do
set :stage_name, "aws"
set :user, "aws"
set :deploy_to, "/usr/applications/aws"
ssh_options[:keys] = [File.join(ENV["HOME"], ".ssh", "test.pem")]
ssh_options[:forward_agent] = true
role :app, "10.2.2.2"
end
You would run:
cap dev deploy; cap aws deploy
You can expand on this to handle VPNs, users, gateways, etc.
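If you prefer the multistage extension over hand-rolled stage tasks, here is a minimal sketch of the Capistrano 2.x setup with the capistrano-ext gem (the file layout is the gem's convention; only the aws values are taken from the answer above):

# config/deploy.rb
require 'capistrano/ext/multistage'
set :stages, %w(dev aws)
set :default_stage, "dev"

# config/deploy/aws.rb -- each stage file carries its own settings
set :user, "aws"
set :deploy_to, "/usr/applications/aws"
ssh_options[:keys] = [File.join(ENV["HOME"], ".ssh", "test.pem")]
ssh_options[:forward_agent] = true
role :app, "10.2.2.2"

Because every stage file can define its own ssh_options, the university server and the AWS instance can authenticate differently while sharing the rest of the configuration.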

Related

Permanently switching user in Capistrano 3 (separate authorization & deploy)

We have the following pattern in server management: every person has their own login user, but deploys are performed entirely by a special deploy user that cannot be logged into directly.
We used this method in Capistrano 2.x:
default_run_options[:shell] = "sudo -u deploy bash"
$ cap stage deploy -s user=thisisme
I'm aware that Capistrano 3.x has a method to switch the user directly:
task :install do
  on roles(:all) do
    as :deploy do
      execute :whoami
    end
  end
end
But this code would have to be repeated in every task, and the default tasks would not inherit the deploy user anyway. Is it possible to set the login user in one place without dragging this code into each task?
Since I had received no proper answer and couldn't work it out myself, I decided to ask the authors. Capistrano 3.x uses SSHKit to manage remote command execution, and here's their answer:
You could try overriding the command map such that every command gets prefixed with the desired sudo string. https://github.com/capistrano/sshkit/blob/master/README.md#the-command-map
SSHKit.config.command_map = Hash.new do |hash, command|
  hash[command] = "<<sudo stuff goes here>> #{command}"
end
The documentation says "this may not be wise, but it would be possible". YMMV
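For the pattern described in the question, a minimal sketch of that override might look like this (placing it in deploy.rb and reusing the sudo -u deploy prefix from the Capistrano 2.x setup; treat it as a starting point, not a vetted recipe):

# config/deploy.rb -- prefix every remote command with a switch to the deploy user
SSHKit.config.command_map = Hash.new do |hash, command|
  hash[command] = "sudo -u deploy #{command}"
end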

Capistrano Deploy to EC2, User Permissions

I have a rails app that I am trying to deploy to an ec2 instance using Capistrano. My deploy.rb:
set :application, "uc_social_server"
set :repository, "septerr@bitbucket.org/urbancoding/uc_social_server.git"
set :user, "ec2-user"
server "ec2-23-22-188-11.compute-1.amazonaws.com", :app, :web, :db, :primary => true
set :deploy_to, "/home/ec2-user/uc_social_server"
ssh_options[:keys] = %w(/Users/sony/.ssh/ec2-social-server-key.pem)
default_run_options[:pty] = true
Running cap deploy:check fails with:
The following dependencies failed. Please check them and try again:
--> You do not have permissions to write to `/home/ec2-user/uc_social_server/releases'. (ec2-23-22-188-11.compute-1.amazonaws.com)
I have tried some of the solutions I found on stackoverflow without success. What is the correct way to deploy to ec2 with capistrano?
Finally figured out the problem.
cap deploy:setup by default makes root the owner of the folders it creates.
So before you run cap deploy:setup you must remember to add set :use_sudo, false to your deploy.rb (or the capistrano script file you are using).
If, like me, you have already run the setup command, resulting in releases and shared folders owned by root:
1. SSH to your EC2 machine and delete these folders.
2. Add set :use_sudo, false to your Capistrano script (deploy.rb in my case).
3. Run cap deploy:setup.
Capistrano should now have created the releases and shared folders with the user you specified in your Capistrano script as the owner.
cap deploy:check should now succeed.
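Putting it together, a minimal deploy.rb for this setup, assembled from the question's own values, would look roughly like this:

set :application, "uc_social_server"
set :user, "ec2-user"
set :use_sudo, false   # prevents Capistrano from creating root-owned directories
set :deploy_to, "/home/ec2-user/uc_social_server"
server "ec2-23-22-188-11.compute-1.amazonaws.com", :app, :web, :db, :primary => true
ssh_options[:keys] = %w(/Users/sony/.ssh/ec2-social-server-key.pem)
default_run_options[:pty] = true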

Define ActiveRecord connection for given Resque queue

Problem Domain
Heroku Cedar stack with multiple databases: RDS for the main database, and Postgres for a second analytics database. The server runs against the read/write RDS and Postgres databases. A nightly rake task, which runs in a different environment, needs to run a specific Resque queue against a read-only slave of the RDS database.
Postgres DB connection
For the record, all models in the Postgres database include:
module Trove::PostgresConnection
  def self.included(base)
    base.class_eval do
      # ...set up the Postgres database connection here
    end
  end
end
This works fine, and, being a module injected into each class, does not get squashed by any changes to ActiveRecord::Base.connection
MySQL Connection
Defined using the Heroku RDS plugin. The connection is made to the read/write production database. Unfortunately, this connection is used regardless of environment. Thus, running a rake task on Heroku with RAILS_ENV=analytics rake some:task does not use the following database.yml entry for ActiveRecord::Base:
analytics:
  adapter: mysql2
  encoding: utf8
  database: dbase
  username: uname
  password: pword
  host: read-only.uri.on.amazonaws.com
  reconnect: true
  port: 3306
Rather, it uses the connection string provided in the RDS connection:
puts Rails.env
-> 'analytics'
puts SomeModel.connection_config[:host]
-> read-write.uri.on.amazonaws.com
Took me a while to figure that out. Note to self: don't just look at environment, look at database host.
Current Workaround
# Perform an operation using another database connection in ActiveRecord
module SwapConnection
  def self.with_connection_to(config, &block)
    ActiveRecord::Base.connection.disconnect!
    ActiveRecord::Base.establish_connection(config)
    yield
  end
end
require 'swap_connection'

class TroveCalculations
  @queue = :trove_queue

  def self.perform(class_name, id)
    SwapConnection.with_connection_to(Rails.env) do
      # Do something in a given queue
    end
  end
end
Desired ability
Have something like this in the Procfile
troveworker: env RAILS_ENV=analytics QUEUE=trove_queue bundle exec rake resque:work
which actually uses the analytics database.yml config for that worker. We currently run our server with this Procfile, but it still uses the RDS database.
To expand on my comment on the question, I meant adding a config for your DB the "Heroku way" and then referencing it in your Procfile for the one worker that will process jobs on that queue.
Add a config/environment variable with the DB config you need using a new name:
heroku config:add ANALYTICS_DB=postgres://some_url
And in your Procfile, based on your example of what you want:
troveworker: env DATABASE_URL=$ANALYTICS_DB QUEUE=trove_queue \
  bundle exec rake resque:work
You'll have to use separate workers for each queue with a different config this way, but the config will be out of the code at least.
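For example, a Procfile along these lines (the mainworker entry and its queue name are purely illustrative) keeps the analytics worker on its own connection while other workers keep the default DATABASE_URL:

troveworker: env DATABASE_URL=$ANALYTICS_DB QUEUE=trove_queue bundle exec rake resque:work
mainworker: env QUEUE=default bundle exec rake resque:work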
I've only played with Heroku, but I thought the database connection info was overridden by the Heroku tools based on environment variables specified by the heroku toolbelt.
The issue here is Heroku generates its own database.yml file: https://devcenter.heroku.com/articles/ruby-support#build-behavior
By using the Amazon RDS add-on, Heroku sets a DATABASE_URL environment variable. You can see its contents by running the following from the root of your application's directory:
heroku config
Also, as of Rails 3.2, it will use a DATABASE_URL env var (if set) instead of a database.yml file:
https://github.com/mperham/sidekiq/issues/503#issuecomment-11862427
The simplest workaround might be to:
create an env var called DATABASE_URL_ANALYTICS w/ the Postgres connection string:
heroku config:add DATABASE_URL_ANALYTICS=postgres://xxxxxxxxxxxx
At the beginning of your rake file (before any rails initialization might occur), add:
ENV['DATABASE_URL'] = ENV['DATABASE_URL_ANALYTICS'] if Rails.env.analytics?
Update: Not working. (original answer kept for documentation)
This is how we've solved it:
Procfile:
troveworker: env RAILS_ENV=analytics QUEUE=trove_queue rake trove:worker
lib/tasks/trove.rake:
namespace :trove do
  desc 'Start the Resque workers in the proper environment'
  task :worker do
    SwapConnection.with_connection_to Rails.env do
      Rake::Task['resque:work'].invoke
    end
  end
end
This solution solves some other issues for us as well, and works quite nicely. Thanks everyone.

How does Sinatra know which environment to use?

I uploaded a Sinatra app to the server (Heroku), but the app acts as if it were running on localhost, unlike another Rails app of mine that works fine there.
So how do I check whether my Sinatra app is using the correct environment? And how does Sinatra know which environment to use?
Heroku takes care of setting the environment; by default it's "production". If you need different config or behavior for different use cases, you have to code that yourself first.
For example
if ENV['RACK_ENV'] == "production"
  # do something
elsif ENV['RACK_ENV'] == "staging"
  # do something else
end
I am not sure why you would want to set the environment explicitly to "production" or something else. That should be left to the discretion of the hosting environment.
Update
More info on Heroku documentation
Further update
heroku run printenv
The above should list your environment variables.
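For reference, Sinatra reads the RACK_ENV environment variable (falling back to "development" when it is unset) and exposes it as settings.environment, so a quick way to check from inside the app is a sketch like this (the route name is illustrative):

require 'sinatra'

# settings.environment reflects ENV['RACK_ENV'], defaulting to :development
get '/env-check' do
  "environment: #{settings.environment}"
end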
I add an environment variable to all my heroku instances:
heroku config:add APP_NAME=<myappname>
Then, for Sinatra, I have the following in the config.ru:
# detect environments and set up some passwords
case ENV['APP_NAME']
when 'prod-damon'
  # whatever for production
when 'dev-damon'
  # whatever for development on Heroku
else
  # whatever for local
end

Deploy to only one role / server with capistrano

I'm trying to set up multiple roles, one for live, and another for dev. They look like this:
role :live, "example.com"
role :dev, "dev.example.com"
When I run cap deploy, however, it executes on both servers. I've tried the following, and it always executes on both:
cap deploy live
cap ROLE=live deploy
What am I missing? I know I can write a custom task that only responds to one role, but I don't want to have to write a whole bunch of tasks just to tell it to respond to one role or another. Thanks!
Capistrano Multistage is definitely the solution to the example you posted for deploying to different environments. As for deploying to specific roles or servers, Capistrano has command-line options for that too.
To deploy to a single role (notice ROLES is plural):
cap ROLES=web deploy
To deploy to multiple roles:
cap ROLES=app,web deploy
To deploy to a particular server (notice HOSTS is plural):
cap HOSTS=web1.myserver.com deploy
To deploy to several servers:
cap HOSTS=web1.myserver.com,web2.myserver.com deploy
To deploy to a server(s) with a role(s):
cap HOSTS=web1.myserver.com ROLES=db deploy
You can do something like this:
task :dev do
  role :env, "dev.example.com"
end

task :prod do
  role :env, "example.com"
end
Then use:
cap dev deploy
cap prod deploy
Just one more hint: if you use multistage, remember to put the ROLES variable before the cap command:
ROLES=web cap production deploy
or after the environment:
cap production ROLES=web deploy
If you put it as the first parameter, multistage will treat it as a stage name and replace it with the default one:
cap ROLES=web production deploy
* [...] executing `dev'
* [...] executing `production'
Try capistrano multistage:
http://weblog.jamisbuck.org/2007/7/23/capistrano-multistage
Roles are intended for deploying different segments of a platform to different servers, as opposed to deploying the whole platform to just one set of servers.
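To make that concrete, a role layout like the following sketch (hostnames are illustrative) sends each part of the stack to its own servers, and tasks scoped to a role only run there:

role :web, "web1.example.com", "web2.example.com"   # serves HTTP requests
role :app, "app1.example.com"                        # runs the application code
role :db,  "db1.example.com", :primary => true       # migrations run here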
