I'm trying to set up multiple roles, one for live, and another for dev. They look like this:
role :live, "example.com"
role :dev, "dev.example.com"
When I run cap deploy, however, it executes for both servers. I've tried the following and it always executes on both.
cap deploy live
cap ROLE=live deploy
What am I missing? I know I can write a custom task that only responds to one role, but I don't want to have to write a whole bunch of tasks just to tell it to respond to one role or another. Thanks!
Capistrano Multistage is definitely the solution to the example you posted for deploying to environments. In regard to your question of deploying to roles or servers, Capistrano has command-line solutions for that too.
To deploy to a single role (notice ROLES is plural):
cap ROLES=web deploy
To deploy to multiple roles:
cap ROLES=app,web deploy
To deploy to a particular server (notice HOSTS is plural):
cap HOSTS=web1.myserver.com deploy
To deploy to several servers:
cap HOSTS=web1.myserver.com,web2.myserver.com deploy
To deploy to specific server(s) with specific role(s):
cap HOSTS=web1.myserver.com ROLES=db deploy
You can do something like this:
task :dev do
  role :env, "dev.example.com"
end

task :prod do
  role :env, "example.com"
end
Then use:
cap dev deploy
cap prod deploy
Just one more hint: if you use multistage, remember to put the ROLES environment variable before the cap command:
ROLES=web cap production deploy
or after the stage name:
cap production ROLES=web deploy
If you pass it as the first argument, multistage will treat it as a stage name and replace it with the default one:
cap ROLES=web production deploy
* [...] executing `dev'
* [...] executing `production'
Try capistrano multistage:
http://weblog.jamisbuck.org/2007/7/23/capistrano-multistage
Roles are intended to deploy different segments to different servers, as opposed to deploying the whole platform to just one set of servers.
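For reference, a minimal multistage layout in the spirit of that post, assuming the capistrano-ext gem (the stage names here mirror the question and are illustrative):

# config/deploy.rb
set :stages, %w(dev live)
set :default_stage, "dev"
require 'capistrano/ext/multistage'

# config/deploy/live.rb
role :app, "example.com"

# config/deploy/dev.rb
role :app, "dev.example.com"

With this in place, cap live deploy and cap dev deploy each target a single stage.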
I have a rails application that is deployed on AWS EC2 instance with CodePipeline. I have added the Build stage in the pipeline using AWS CodeBuild to build test my code.
I have no idea where to add the Rails commands below so that they execute whenever the pipeline auto-deploys the code:
bundle install
rake db:migrate, db:create, assets:precompile
Restart sidekiq
You need to use the CodeDeploy service as part of your CodePipeline. The pipeline will have two stages: a source stage (taking source from GitHub, CodeCommit, etc.) and a deploy stage (deploying to EC2 using CodeDeploy).
The CodeDeploy agent runs on the EC2 instance and takes deployment commands from the service. CodeDeploy deployments need an AppSpec file that specifies where to copy the source files on the EC2 instance and which scripts ("hooks") to run on the instance; that is where you will run commands like 'bundle install' or 'restart sidekiq'.
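As a rough sketch, an appspec.yml for a Rails app could look like this (the destination path and script names are hypothetical):

version: 0.0
os: linux
files:
  - source: /
    destination: /var/www/my_app
hooks:
  AfterInstall:
    # e.g. bundle install, rake db:migrate, rake assets:precompile
    - location: scripts/after_install.sh
      timeout: 300
      runas: ec2-user
  ApplicationStart:
    # e.g. restart the app server and sidekiq
    - location: scripts/start_application.sh
      timeout: 300
      runas: ec2-user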
Instead of me trying to list every step, here are a few resources that should get you started. Try the first tutorial, which will help you understand the complete picture (CodeDeploy + CodePipeline):
https://docs.aws.amazon.com/codepipeline/latest/userguide/tutorials-simple-codecommit.html
https://dev.to/mknycha/deploying-ruby-app-using-aws-codedeploy-to-ec2-how-to-do-it-with-bundler-49gg
How to write appspec.yml for Ruby on Rails on AWS CodeDeploy
The Heroku docs state:
Your Heroku app runs in at least two environments:
On your local machine (i.e., development).
Deployed to the Heroku platform (i.e., production)
Ideally, your app should run in two additional environments:
Test, for running the app’s test suite safely in isolation
Staging, for running a new build of the app in a production-like setting before promoting it
https://devcenter.heroku.com/articles/multiple-environments#managing-staging-and-production-configurations
however, the Heroku pipeline interface only offers 'staging' and 'production' as options. How do I create a 'test' stage in my pipeline? Are the docs out of date, or am I misunderstanding the functionality?
Right now, I deploy my (Spring Boot) application to an EC2 instance like this:
Build JAR file on local machine
Deploy/Upload JAR via scp command (Ubuntu) from my local machine
I would like to automate that process, but:
without using Jenkins + Rundeck CI/CD tools
without using AWS CodeDeploy service since that does not support GitLab
Question: Is it possible to perform these 2 simple steps (currently done manually: building and deploying via scp) with GitLab CI/CD tools, and if so, can you outline the steps to do it?
Thanks!
You need to create a .gitlab-ci.yml file in your repository with CI jobs defined to do the two tasks you've defined.
Here's an example to get you started.
stages:
  - build
  - deploy

build:
  stage: build
  image: gradle:jdk
  script:
    - gradle build
  artifacts:
    paths:
      - my_app.jar

deploy:
  stage: deploy
  image: ubuntu:latest
  script:
    - apt-get update
    - apt-get -y install openssh-client
    - scp my_app.jar target.server:/my_app.jar
In this example, the build job runs a gradle container and uses gradle to build the app. GitLab CI artifacts are used to capture the built jar (my_app.jar), which is passed on to the deploy job.
The deploy job runs an ubuntu container, installs openssh-client (for scp), then executes scp to copy my_app.jar (passed from the build job) to the target server.
You have to fill in the actual details of building and copying your app. For secrets like SSH keys, set project level CI/CD variables that will be passed in to your CI jobs.
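For example, the deploy job above could load a private key from a CI/CD variable like this (SSH_PRIVATE_KEY is an assumed variable name you would define under the project's CI/CD settings):

deploy:
  stage: deploy
  image: ubuntu:latest
  script:
    - apt-get update && apt-get -y install openssh-client
    # Write the key from the CI/CD variable and lock down its permissions
    - mkdir -p ~/.ssh
    - echo "$SSH_PRIVATE_KEY" > ~/.ssh/id_rsa
    - chmod 600 ~/.ssh/id_rsa
    # Skip interactive host-key verification (or pre-populate known_hosts instead)
    - scp -o StrictHostKeyChecking=no my_app.jar target.server:/my_app.jar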
Create a shell file with the following contents.
#!/bin/bash
# Copy the JAR file to EC2 via SCP, with the PEM key in the home directory (usually /home/ec2-user)
scp -i user_key.pem myJar.jar ec2-user@my.ec2.id.amazonaws.com:/home/ec2-user
# SSH to the EC2 instance
ssh -T -i "bastion_keypair.pem" ec2-user@my.ec2.id.amazonaws.com /bin/bash <<-'END2'
# The following commands will be executed automatically by bash.
# Consider this a remote shell script.
killall java
java -jar ~/myJar.jar server ~/config.yml &>/dev/null &
echo 'done'
# Once completed, the shell will exit.
END2
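Make the file executable and run it after each local build (deploy.sh is an assumed file name):

chmod +x deploy.sh
./deploy.sh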
In 2020, this is easier with GitLab 13.0 (May 2020), building on an older feature, Auto DevOps (introduced in GitLab 11.0, June 2018):
Auto DevOps provides pre-defined CI/CD configuration allowing you to automatically detect, build, test, deploy, and monitor your applications.
Leveraging CI/CD best practices and tools, Auto DevOps aims to simplify the setup and execution of a mature and modern software development lifecycle.
But now (May 2020):
Auto Deploy to ECS
Until now, there hasn’t been a simple way to deploy to Amazon Web Services. As a result, GitLab users had to spend a lot of time figuring out their own configuration.
In GitLab 13.0, Auto DevOps has been extended to support deployment to AWS!
GitLab users who are deploying to AWS Elastic Container Service (ECS) can now take advantage of Auto DevOps, even if they are not using Kubernetes. Auto DevOps simplifies and accelerates delivery and cloud deployment with a complete delivery pipeline out of the box. Simply commit code and GitLab does the rest! With the elimination of these complexities, teams can focus on the innovative aspects of software creation!
In order to enable this workflow, users need to:
define the AWS-typed environment variables ‘AWS_ACCESS_KEY_ID’, ‘AWS_ACCOUNT_ID’ and ‘AWS_REGION’, and
enable Auto DevOps.
Then, your ECS deployment will be automatically built for you with a complete, automatic, delivery pipeline.
See documentation and issue
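A minimal sketch of what that looks like in .gitlab-ci.yml, assuming the AWS variables above are defined at the project level (AUTO_DEVOPS_PLATFORM_TARGET is the setting GitLab uses to select the ECS target):

# .gitlab-ci.yml
include:
  - template: Auto-DevOps.gitlab-ci.yml

variables:
  # Tell Auto DevOps to deploy to AWS ECS instead of Kubernetes
  AUTO_DEVOPS_PLATFORM_TARGET: ECS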
So, if I run the command heroku ps:restart event_machine.1 --app app-name I get what I want. However, I'm trying to automate our travis-ci deploy process. What needs to happen is the following:
We have a successful test run.
Next, we deploy the code
If we deploy the code successfully, we need to execute a few rake tasks that tell an external service to rebuild itself.
Once this is fired off, we need to restart the Heroku app. In Travis, ideally, this would be executed on the Heroku machine via a deploy run command, in much the same way that we run bundle exec db:migrate.
Does anyone have any thoughts on how we can restart a particular dyno (or dynos) via a command that can be run as heroku run something, since that is what Travis executes in the deploy run?
So, to answer this: we had a Procfile that executes a rake command to spin up EventMachine. We modified this at the Procfile level to first tell the external service to rebuild itself before starting EventMachine. This takes Travis completely out of the deployment loop, which is better because it allows Heroku and Travis to each do what they should be responsible for.
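A sketch of what that Procfile entry could look like (the rake task names are hypothetical):

# Procfile: rebuild the external service first, then start EventMachine
event_machine: bundle exec rake external_service:rebuild && bundle exec rake event_machine:start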
I need to deploy to 2 different servers, and these 2 servers have different authentication methods (one is my university's server and the other is an Amazon Web Services (AWS) server).
I already have Capistrano running for my university's server, but I don't know how to add the deployment to AWS, since for this one I need to add ssh options, for example to use the .pem file, like this:
ssh_options[:keys] = [File.join(ENV["HOME"], ".ssh", "test.pem")]
ssh_options[:forward_agent] = true
I have browsed Stack Overflow and no post mentions how to deal with different authentication methods (this and this).
I found a post that talks about 2 different keys, but that one refers to a server and a git repository, each using a different pem file. This is not the case here.
I got to this tutorial, but couldn't find what I need.
I don't know if this is relevant to what I am asking: I am working on a Rails app with Ruby 1.9.2p290 and Rails 3.0.10, and I am using an SVN repository.
Any help is welcome. Thanks a lot!
You need to use Capistrano multistage. There is a gem that does this, or you could just include an environments or stage file directly in the Capfile.
You will not be able to deploy to these environments at the same time, but you could sequentially.
desc "deploy to dev environment"
task :dev do
set :stage_name, "dev"
set :user, "dev"
set :deploy_to, "/usr/applications/dev"
role :app, "10.1.1.1"
end
desc "deploy to aws environment"
task :aws do
set :stage_name, "aws"
set :user, "aws"
set :deploy_to, "/usr/applications/aws"
ssh_options[:keys] = [File.join(ENV["HOME"], ".ssh", "test.pem")]
ssh_options[:forward_agent] = true
role :app, "10.2.2.2"
end
You would run:
cap dev deploy; cap aws deploy
You can expand on this to handle VPNs, users, gateways, etc.
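For instance, Capistrano 2's gateway setting can tunnel connections through a bastion host (the host name here is illustrative):

# Inside a stage task: route all SSH connections through a jump host
set :gateway, "aws@bastion.example.com"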