Capistrano Deploy to EC2, User Permissions - amazon-ec2

I have a rails app that I am trying to deploy to an ec2 instance using Capistrano. My deploy.rb:
set :application, "uc_social_server"
set :repository, "septerr@bitbucket.org/urbancoding/uc_social_server.git"
set :user, "ec2-user"
server "ec2-23-22-188-11.compute-1.amazonaws.com", :app, :web, :db, :primary => true
set :deploy_to, "/home/ec2-user/uc_social_server"
ssh_options[:keys] = %w(/Users/sony/.ssh/ec2-social-server-key.pem)
default_run_options[:pty] = true
Running cap deploy:check fails with:
The following dependencies failed. Please check them and try again:
--> You do not have permissions to write to `/home/ec2-user/uc_social_server/releases'. (ec2-23-22-188-11.compute-1.amazonaws.com)
I have tried some of the solutions I found on Stack Overflow without success. What is the correct way to deploy to EC2 with Capistrano?

Finally figured out the problem.
cap deploy:setup by default makes root the owner of the folders it creates.
So before you run cap deploy:setup you must remember to add set :use_sudo, false to your deploy.rb (or the capistrano script file you are using).
If, like me, you have already run the setup command and ended up with releases and shared folders owned by root:
ssh to your EC2 machine and delete these folders
add set :use_sudo, false to your Capistrano script (deploy.rb in my case)
run cap deploy:setup
Capistrano should now have created the releases and shared folders owned by the user you specified in your Capistrano script.
cap deploy:check should now succeed.
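For reference, the relevant lines of deploy.rb end up looking something like this (a sketch using the paths from the question; adjust to your own setup):

set :user, "ec2-user"
set :deploy_to, "/home/ec2-user/uc_social_server"
# run remote commands as :user rather than via sudo, so releases/ and shared/
# are created owned by ec2-user instead of root
set :use_sudo, false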

Related

Laravel deployment with capistrano cache files

I am having an issue while I'm trying to deploy a Laravel application onto an Ubuntu Server with Capistrano.
My deployment directory is /var/www/project_stage. When I deploy my project to that directory, everything works just fine. My project becomes live, every single line of code works just as it should.
But when I make a change and deploy a new version of the same project, somehow (I'm guessing) my files are getting cached: the site does not respond with the newest release, it still responds with the old version that has already been overwritten.
When I deploy the project to a different folder (e.g. /var/www/project_stage2 instead of /var/www/project_stage) and change my Nginx config to serve from that folder, it works as it should again. But not on the second deploy to the same directory. So I can say that I can deploy to a different directory every time, but I cannot deploy to the same directory twice. It always responds as on the first deploy.
Here's what I've tried:
I checked whether the current directory of Capistrano is linked to the correct folder, which it is.
I checked whether the changes I made are visible in the new deploy, which they are. The files are definitely changed on the new deploy.
I checked whether Nginx is looking at the correct release directory, which it is.
I tried running the php artisan cache:clear, route:clear, view:clear and config:cache commands, and I ran composer dump-autoload too. Nothing worked.
I changed Nginx's sendfile parameter to off and restarted; no result.
I read a similar issue on this question, but it didn't work in my case.
Here is my deploy.rb:
#deploy_path inherited from staging.rb
lock "~> 3.10.1"

set :application, "project_stage"
set :repo_url, "MY REPO HERE"
set :keep_releases, 10
set :laravel_dotenv_file, "./.env.staging"

namespace :deploy do
  before :updated, :easy do
    on roles(:all) do |host|
      execute :chmod, "-R 777 #{deploy_path}/shared/storage/logs"
      execute :chmod, "-R 777 #{deploy_path}/shared/storage/framework"
    end
  end

  after :finished, :hard do
    on roles(:all) do |host|
    end
  end

  desc "Build"
  after :updated, :build do
    on roles(:web) do
      within release_path do
        execute :php, "artisan clear-compiled"
        execute :php, "artisan cache:clear"
        execute :php, "artisan view:clear"
        execute :php, "artisan route:cache"
        execute :php, "artisan config:cache"
      end
    end
  end
end # end deploy namespace
I am using PHP 7.0 (FPM with a Unix socket), Nginx, Laravel 5, Capistrano 3 (with the capistrano/laravel gem) and Ubuntu Server 16.04.
The problem you are describing could occur if you are using OPcache with opcache.validate_timestamps set to zero. With validate_timestamps set to zero, OPcache never checks for a newer version of the file. This improves performance slightly, but it means you will need to manually flush the cache.
There are two things you can do to resolve the issue:
Set opcache.validate_timestamps to 1 in your php.ini. This will result in a small performance decrease.
...or flush the cache during your deployment, after the new files have been deployed, by calling opcache_reset() in a PHP script.
Note that because you are using php-fpm, you should be able to flush the cache from the cli. If you were using Apache with mod_php you would need to flush the cache in a script invoked by Apache (through an HTTP request) rather than from the cli. The cache must be flushed in the context that your application runs in.
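If you go the opcache_reset() route, one way to wire it into the deploy is a sketch like the following (not from the original answer; the script name and URL are assumptions). It assumes a one-line file such as public/opcache-reset.php containing only <?php opcache_reset();, which a hook requests over HTTP after the release is published, so the reset runs in the same PHP-FPM context that serves the app:

namespace :deploy do
  # hypothetical hook; lock down or delete the reset script in production
  after :finished, :flush_opcache do
    on roles(:web) do
      # adjust the URL to whatever vhost Nginx serves this app on
      execute :curl, "-fsS", "http://localhost/opcache-reset.php"
    end
  end
end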

How to execute a rake task using mina?

I want to run a rake task (migrate) contained in my Rakefile in my Sinatra app. I am using Mina to deploy. rake migrate works great if I run it on the server or on my development, but I cannot get Mina to execute the task.
My current deploy looks like this within config/deploy.rb
task :deploy => :environment do
  deploy do
    # Put things that will set up an empty directory into a fully set-up
    # instance of your project.
    invoke :'git:clone'
    invoke :'deploy:link_shared_paths'

    to :launch do
      queue "sudo /opt/nginx/sbin/nginx -s reload"
    end
  end
end
I tried both queue "rake migrate" and queue "#{rake} migrate" within the deploy block and within the launch block, but it always fails with bash: command not found.
In Mina, using ssh to execute rake is not quite a smart move.
mina 'rake[rake_task:task_whatever_you_write]' on=environment
is better.
Mina uses ssh to run remote commands. That means the commands run in a different environment than when you log in. This causes problems with rvm and rbenv, as they are not initialised properly. Luckily, mina has rvm support; you just have to set it up:
require 'mina/rvm'

task :environment do
  invoke :'rvm:use[ruby-1.9.3-p125@gemset_name]'
end

task :deploy => :environment do
  ...
end
You can do a similar thing for rbenv (documentation)
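With the environment set up this way, the migration can then be queued inside the deploy block itself (a sketch based on the deploy task from the question; it assumes rake comes from the project's bundle):

task :deploy => :environment do
  deploy do
    invoke :'git:clone'
    invoke :'deploy:link_shared_paths'
    # runs in the freshly built release directory, with rvm already initialised
    queue "bundle exec rake migrate"

    to :launch do
      queue "sudo /opt/nginx/sbin/nginx -s reload"
    end
  end
end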

Capistrano deploying to different servers with different authentication methods

I need to deploy to 2 different servers, and these 2 servers have different authentication methods (one is my university's server and the other is an Amazon Web Services (AWS) instance).
I already have Capistrano running for my university's server, but I don't know how to add the deployment to AWS, since for that one I need to add ssh options, for example to use the .pem file, like this:
ssh_options[:keys] = [File.join(ENV["HOME"], ".ssh", "test.pem")]
ssh_options[:forward_agent] = true
I have browsed Stack Overflow and no post mentions how to deal with different authentication methods (this and this).
I found a post that talks about 2 different keys, but that one refers to a server and a git repository, each using a different pem file. This is not the case here.
I got to this tutorial, but couldn't find what I need.
I don't know if this is relevant to what I am asking: I am working on a Rails app with ruby 1.9.2p290 and rails 3.0.10, and I am using an svn repository.
Any help is welcome. Thanks a lot.
You need to use capistrano multi-stage. There is a gem that does this or you could just include an environments or stage file directly into the capfile.
You will not be able to deploy to these environments at the same time, but you could sequentially.
desc "deploy to dev environment"
task :dev do
set :stage_name, "dev"
set :user, "dev"
set :deploy_to, "/usr/applications/dev"
role :app, "10.1.1.1"
end
desc "deploy to aws environment"
task :aws do
set :stage_name, "aws"
set :user, "aws"
set :deploy_to, "/usr/applications/aws"
ssh_options[:keys] = [File.join(ENV["HOME"], ".ssh", "test.pem")]
ssh_options[:forward_agent] = true
role :app, "10.2.2.2"
end
You would run:
cap dev deploy; cap aws deploy
You can expand on this to handle things like opening VPNs, different users, gateways, etc.

Capistrano deploy fails after I changed the repository URL

I have a simple deployment via Capistrano from a Git repository. At first I was deploying from GitHub and everything worked just fine. But then I moved my repository to BitBucket and now I'm getting
fatal: Could not parse object '9cfb...'.
The problem goes away once I change
set :deploy_via, :remote_cache
to
set :deploy_via, :copy
but that doesn't fix the problem, it only bypasses it. Is there any way I can tell capistrano to just drop the old cache?
Capistrano 2.X
Delete and re-clone the repo using the new address:
cd $deploy_to/shared
rm -rf cached-copy
git clone ssh://git@example.org/new/repo.git cached-copy
Modify your config/deploy.rb to use the new repo:
set :repository, "ssh://git#example.org/new/repo.git"
set :scm, :git
set :deploy_via, :remote_cache
Deploy again:
cap deploy
Capistrano 3.X
Remove the $deploy_to/repo directory
Modify your config/deploy.rb (same as 2.X)
cap deploy
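Note that in Capistrano 3 the repository setting is :repo_url rather than :repository, so the deploy.rb change becomes (placeholder URL as in the 2.X example above):

set :repo_url, "ssh://git@example.org/new/repo.git"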
I gotta say I’m not sure, since I haven’t been able to test this, but this should work:
cap deploy:cleanup -s keep_releases=0
Since it wipes every release (cache) from the server.
Apparently you will also need to remove shared/cached-copy, because this doesn’t seem to be cleaned by the Capistrano call above according to the comment below.
Capistrano 2 and below
SSH to your server and update the repo URL in ./shared/cached-copy/.git/config in the deployment folder, or just remove ./shared/cached-copy entirely.
Capistrano 3 and above
SSH to your server and update the repo in ./repo/config of the deployment folder.
Check Fixing Capistrano 3 deployments after a repository change
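In both cases the file is a plain git config, and the line to change is the url entry under the origin remote, for example (placeholder URL as in the earlier answer):

[remote "origin"]
  url = ssh://git@example.org/new/repo.git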
I solved this with the following in deploy.rb:
namespace :deploy do
  task :cope_with_git_repo_relocation do
    run "if [ -d #{shared_path}/cached-copy ]; then cd #{shared_path}/cached-copy && git remote set-url origin #{repository}; else true; fi"
  end
end
before "deploy:update_code", "deploy:cope_with_git_repo_relocation"
It makes deploys a little slower, so it's worth removing once you're comfortable that all your deploy targets have caught up.
You need to change the git origin in your /shared/cached-copy folder:
cd /var/www/your-project/production/shared/cached-copy
git remote remove origin
git remote add origin git@bitbucket.org:/origin.git
Then try cap production deploy.
The simplest way is to just change the repo URL to the new one in .git/config in the shared/cached-copy directory on the webserver. Then you can do a normal deploy as usual.
It depends on your version; Capistrano 3 is different from its older ancestors.
Read my original answer here on how to fix similar issues: Capistrano error when change repository using git.
If you need to do this for a lot of repos, you might want to add a task for it.
For Capistrano 3, add this task to your deploy.rb:
desc "remove remote git cache repository"
task :remove_git_cache_repo do
  on roles(:all) do
    execute "cd #{fetch(:deploy_to)} && rm -Rf repo"
  end
end
And then run it once for every stage:
cap testing remove_git_cache_repo
Here's the Capistrano 3 version of what this answer talks about. It might be tedious to do what the answer suggests on each server.
So drop this in deploy.rb and then run cap <environment> deploy:fix_repo_origin
namespace :deploy do
  desc 'Fix repo origin, for use when changing git repo URLs'
  task :fix_repo_origin do
    on roles(:web) do
      within repo_path do
        execute(:git, "remote set-url origin #{repo_url}")
      end
    end
  end
end
For Capistrano 3.0+
Change the repository URL in your config/deploy.rb
Change the repository URL in the your_project/repo/config file on the server.

Deploy to only one role / server with capistrano

I'm trying to set up multiple roles, one for live, and another for dev. They look like this:
role :live, "example.com"
role :dev, "dev.example.com"
When I run cap deploy, however, it executes for both servers. I've tried the following and it always executes on both.
cap deploy live
cap ROLE=live deploy
What am I missing? I know I can write a custom task that only responds to one role, but I don't want to have to write a whole bunch of tasks just to tell it to respond to one role or another. Thanks!
Capistrano Multistage is definitely the solution to the example you posted for deploying to environments. In regard to your question of deploying to roles or servers, Capistrano has command-line solutions for that too.
To deploy to a single role (notice ROLES is plural):
cap ROLES=web deploy
To deploy to multiple roles:
cap ROLES=app,web deploy
To deploy to a particular server (notice HOSTS is plural):
cap HOSTS=web1.myserver.com deploy
To deploy to several servers:
cap HOSTS=web1.myserver.com,web2.myserver.com deploy
To deploy to a server(s) with a role(s):
cap HOSTS=web1.myserver.com ROLES=db deploy
You can do something like this:
task :dev do
  role :env, "dev.example.com"
end

task :prod do
  role :env, "example.com"
end
Then use:
cap dev deploy
cap prod deploy
Just one more hint: if you use multistage, remember to put the ROLES variable before the cap command:
ROLES=web cap production deploy
or after the environment:
cap production ROLES=web deploy
If you put it as the first argument, multistage will treat it as a stage name and replace it with the default one:
cap ROLES=web production deploy
* [...] executing `dev'
* [...] executing `production'
Try capistrano multistage:
http://weblog.jamisbuck.org/2007/7/23/capistrano-multistage
Roles are intended for deploying different segments to different servers, as opposed to deploying the whole platform to just one set of servers.
