Permanently switching user in Capistrano 3 (separate authorization & deploy) - ruby

We have the following pattern in server management: every person has their own user account, but deploys are performed entirely by a special deploy user, which cannot be logged into directly.
We used this method in Capistrano 2.x:
default_run_options[:shell] = "sudo -u deploy bash"
$ cap stage deploy -s user=thisisme
I'm aware that Capistrano 3.x has a way to switch the user directly:
task :install do
  on roles(:all) do
    as :deploy do
      execute :whoami
    end
  end
end
But this block would have to be repeated in every task, and the default tasks will not inherit the deploy user anyway. Is it even possible to set the login user in one place, without dragging this code into each task?

Since I had received no proper answer and couldn't figure it out myself, I decided to ask the authors. Capistrano 3.x uses SSHKit to manage remote command execution, and here is their answer:
You could try overriding the command map such that every command gets prefixed with the desired sudo string. https://github.com/capistrano/sshkit/blob/master/README.md#the-command-map
SSHKit.config.command_map = Hash.new do |hash, command|
  hash[command] = "<<sudo stuff goes here>> #{command}"
end
The documentation says "this may not be wise, but it would be possible". YMMV
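For illustration, a minimal sketch of that suggestion, assuming the target user is called deploy and that the login user is allowed passwordless sudo -u deploy (both assumptions, not something the SSHKit authors specified):

SSHKit.config.command_map = Hash.new do |hash, command|
  # Prefix every command Capistrano issues so it executes as the deploy
  # user; the Hash block memoizes the mapping the first time each command is used.
  hash[command] = "sudo -u deploy #{command}"
end

Dropped into deploy.rb, this would apply to every task without touching them individually, which is the "permanent" switch the question asks for.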

Related

Running cron job on localhost using Whenever gem

Currently I have a Ruby and Sinatra project in which I am trying to automate a task. I have followed all the installation steps for the Whenever gem and written a function in my schedule.rb file that looks like this:
set :output, "./cron_log.log"
every 1.minutes do
  command "ruby ./scripts/my_script.rb"
end
My project will eventually be deployed to a remote server, but I do not have access to that at the moment. Instead, I am testing it all by running it on my local machine using the command bundle exec shotgun -o 0.0.0.0 -p 9393. When I run it, none of the functions in my schedule.rb file are run.

I ran a test by putting require './scripts/my_script.rb' at the top of the schedule.rb file. Whenever I run bundle exec whenever, that outside function will execute, but not the function inside the every block. I figured maybe it only works on a live server, so I ran the bundle exec shotgun -o 0.0.0.0 -p 9393 command to start my server on localhost, but nothing in the schedule.rb file was called, including the outside function.

I tried putting the command system("bundle exec whenever") in my authentication file, which gets activated every time a user loads the home page of my website; the outside function in schedule.rb does get called that way, but the every function does not. Even if it did work like this, I don't want the file to get called every single time a user accesses the home page.
I also tried putting require './config/schedule' in the authentication file and that just completely breaks the website and gives this error message:
Boot Error
Something went wrong while loading config.ru
NoMethodError: undefined method `every' for main:Object
Here is part of the output when running the crontab -l command:
# Begin Whenever generated tasks for: /file_path_redacted/config/schedule.rb at: 2022-10-21 18:50:21 -0500
* * * * * /bin/bash -l -c 'ruby ./scripts/my_script.rb >> ./cron_log.log 2>&1'
# End Whenever generated tasks for: /file_path_redacted/config/schedule.rb at: 2022-10-21 18:50:21 -0500
So, my question is this: How and when is the schedule.rb file supposed to get called? And is it important that I deploy the project to the remote server for the cron job to work? Again, it is not possible for me to deploy it, as it must be fully working before being deployed. If it is possible to run on localhost, what am I doing wrong?
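For reference: schedule.rb is never loaded by the running application. It is only evaluated when the whenever command is run to generate or install crontab entries, and cron itself then executes those entries on schedule, independently of any web server. The usual install step is:

bundle exec whenever --update-crontab

Also note that cron runs jobs from the crontab owner's home directory, so relative paths like ./scripts/my_script.rb and ./cron_log.log resolve there rather than in the project directory.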

Codebuild Workflow with environment variables

I have a monolithic GitHub project that contains multiple different applications, which I'd like to integrate with an AWS CodeBuild CI/CD workflow. My issue is that if I make a change to one project, I don't want the others to be updated. Essentially, I want to create a logical fork that deploys differently based on the files changed in a particular commit.
Basically my project repository looks like this:
- API
  - node_modules
  - package.json
  - dist
  - src
- REACTAPP
  - node_modules
  - package.json
  - dist
  - src
- scripts
  - 01_install.sh
  - 02_prebuild.sh
  - 03_build.sh
- .ebextensions
In terms of deployment, my API project gets deployed to Elastic Beanstalk and my REACTAPP gets deployed as static files to S3. I've tried a few things but decided that the only viable approach is to manually perform this deploy step within my own 03_build.sh script, because there's no way to branch dynamically within CodeBuild's deploy step (I could be wrong).
Anyway, my issue is that I essentially need to create a decision tree to determine which project gets executed, so that if I make a change to API and push, it doesn't unnecessarily deploy REACTAPP to S3 (and vice versa).
I managed to get this working on localhost by updating environment variables at certain points in the build process and then reading them in separate steps. However, this fails on CodeBuild because of permission issues, i.e. I don't seem to be able to update env variables from within the CI process itself.
Explicitly, my buildconf.yml looks like this:
version: 0.2

env:
  variables:
    VARIABLES: 'here'
    AWS_ACCESS_KEY_ID: 'XXXX'
    AWS_SECRET_ACCESS_KEY: 'XXXX'
    AWS_REGION: 'eu-west-1'
    AWS_BUCKET: 'mybucket'

phases:
  install:
    commands:
      - sh ./scripts/01_install.sh
  pre_build:
    commands:
      - sh ./scripts/02_prebuild.sh
  build:
    commands:
      - sh ./scripts/03_build.sh
I'm running my own shell scripts to perform some logic and I'm trying to pass variables between scripts: install->prebuild->build
To give one example, here's the 01_install.sh where I diff each project version to determine whether it needs to be updated (excuse any minor errors in bash):
#!/bin/bash
# STAGE 1
# _______________________________________
# API PROJECT INSTALL
# Run only if the API version changed in the pre-push (this is just a sample;
# I'll likely end up storing the version & previous version within package.json).
# diff exits non-zero when the files differ, so negate it:
if ! diff ./api/version.json ./api/old_version.json > /dev/null 2>&1
then
  echo "🤖 Installing dependencies in API folder..."
  cd ./api/ && npm install
  # Set a variable to be used by the 02_prebuild.sh script
  TEST_API="true"
  export TEST_API
else
  echo "No change to API"
fi
# ______________________________________
# REACTAPP PROJECT INSTALL
# Do if REACTAPP version number has changed (similar to above):
...
Then in my next stage I read these variables to determine whether I should run tests on the project 02_prebuild.sh:
#!/bin/bash
# STAGE 2
# _________________________________
# API PROJECT PRE-BUILD
# Do if install was initiated
if [[ $TEST_API == "true" ]]; then
  echo "🤖 Run tests on API project..."
  cd ./api/ && npm run tests
  echo $TEST_API
  BUILD_API="true"
  export BUILD_API
else
  echo "Don't test API"
fi
# ________________________________
# TODO: Complete for REACTAPP, similar to above
...
In my final script I use the BUILD_API variable to build to the dist folder, then I deploy that to either Elastic Beanstalk (for API) or S3 (for REACTAPP).
When I run this locally it works; however, when I run it on CodeBuild I get a permissions failure, presumably because my bash scripts cannot export environment variables. I'm wondering whether anyone knows how to update environment variables from within the build process itself, or whether anyone has a better approach to achieving my goals (a conditional/variable build process on CodeBuild).
EDIT:
So an approach that I've managed to get working: instead of using env variables, I'm creating new files with specific names (using fs) and then reading the contents of the file to make logical decisions. I can access these files from each of the bash scripts, so it works pretty elegantly with some automatic cleanup.
I won't edit the original question, as it's still an issue and I'd like to know how/if other people solved it. I'm still playing around with how to actually use the eb deploy and s3 CLI commands within the build scripts, as CodeBuild does not seem to come with the EB CLI installed and my .ebextensions file does not seem to be honoured.
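For what it's worth, variables exported inside sh ./scripts/01_install.sh exist only in that child shell, which is why they are gone by the next phase; the file-based workaround sidesteps exactly that. A minimal sketch of the flag-file idea (the file name is made up):

# 01_install.sh -- record the decision in a file instead of an env var
echo "true" > .flag_test_api

# 02_prebuild.sh -- later phases share the working directory, so read it back
if [ "$(cat .flag_test_api 2>/dev/null)" = "true" ]; then
  echo "🤖 Run tests on API project..."
fi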
Source control repos like GitHub can be configured to send a POST event to an API endpoint when you push to a branch. You can consume this POST request in Lambda through API Gateway. The event data includes which files were modified by the commit, so the Lambda function can process it to figure out what to deploy. If you're struggling to deploy to your servers from the CodeBuild container, you might want to try posting an artifact to S3 as an installable package and then have your server grab it from there.
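A sketch of such a handler (Ruby Lambda runtime; the field names follow GitHub's push-event payload, and the actual deploy trigger is left abstract):

require 'json'

def handler(event:, context:)
  # API Gateway (proxy integration) passes the webhook body as a JSON string.
  payload = JSON.parse(event['body'])

  # Each commit in a push event lists its added/removed/modified files.
  changed = payload['commits'].flat_map do |c|
    c['added'] + c['removed'] + c['modified']
  end

  deploy_api      = changed.any? { |path| path.start_with?('API/') }
  deploy_reactapp = changed.any? { |path| path.start_with?('REACTAPP/') }

  # ...kick off the appropriate build/deploy for each flag here...

  { statusCode: 200, body: JSON.generate(api: deploy_api, reactapp: deploy_reactapp) }
end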

How to execute a rake task using mina?

I want to run a rake task (migrate) contained in the Rakefile of my Sinatra app. I am using Mina to deploy. rake migrate works great if I run it on the server or in development, but I cannot get Mina to execute the task.
My current deploy looks like this within config/deploy.rb
task :deploy => :environment do
  deploy do
    # Put things that will set up an empty directory into a fully set-up
    # instance of your project.
    invoke :'git:clone'
    invoke :'deploy:link_shared_paths'

    to :launch do
      queue "sudo /opt/nginx/sbin/nginx -s reload"
    end
  end
end
I tried both queue "rake migrate" and queue "#{rake} migrate", within the deploy block and within the launch block, but it always fails with bash: command not found.
In Mina, using ssh to execute rake is not quite a smart move. This is better:
mina 'rake[rake_task:task_whatever_you_write]' on=environment
Mina uses ssh to run remote commands. That means the commands run in a different environment than the one you get when you log in. This causes problems with rvm and rbenv, as they are not initialised properly. Luckily, mina has rvm support; you just have to set it up:
require 'mina/rvm'

task :environment do
  invoke :'rvm:use[ruby-1.9.3-p125@gemset_name]'
end

task :deploy => :environment do
  ...
end
You can do a similar thing for rbenv (documentation)
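Combining the two answers, a sketch of running the migration during deploy (deploy_to and current_path are standard mina settings; the rake task name comes from the question):

task :deploy => :environment do
  deploy do
    invoke :'git:clone'
    invoke :'deploy:link_shared_paths'

    to :launch do
      # cd into the deployed release so the Rakefile is found; bundle exec
      # resolves rake from the app's bundle instead of the bare ssh PATH.
      queue "cd #{deploy_to}/#{current_path} && bundle exec rake migrate"
      queue "sudo /opt/nginx/sbin/nginx -s reload"
    end
  end
end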

Multiple schedule.rb files with whenever gem

Is it possible to have multiple schedule.rb files when using the whenever gem in Rails to set up cron jobs? I am looking to have a regular schedule.rb file and also a reports_schedule.rb file that will be deployed to a different server, which has its own specific reports environment.
How does whenever use the schedule.rb file? Is this possible?
It looks like it is possible, if a bit ugly. Looking at the source code (job_list.rb:25), whenever just does an instance_eval of the schedule file. So you can do something like the following.
schedule.rb
# Load reporting schedules
instance_eval(File.read('reporting_schedule.rb'), 'reporting_schedule.rb')

# All your regular jobs
# ...
reporting_schedule.rb
# Need some way to know if you are on the reporting server
if `hostname` =~ /reporting_server/
  # All your reporting jobs
  # ...
end
Worked for me doing some quick tests with the whenever command. Have not tried using it in a deploy though.
Not sure whether this was added after the question was asked, but you seem to be able to do role-based scheduling now; see the README at https://github.com/javan/whenever
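For reference, a sketch of what the role-based variant could look like (role and task names here are invented):

# config/schedule.rb -- jobs tagged with a role are only written to the
# crontab when that role is passed on the command line.
every :day, at: '2am', roles: [:reports] do
  rake 'reports:generate'
end

every 5.minutes do
  command 'echo "runs on every role"'
end

On the reports server you would then install with something like whenever --update-crontab --roles reports, and the untagged job would be installed everywhere.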

Capistrano deploying to different servers with different authentication methods

I need to deploy to 2 different servers, and these 2 servers have different authentication methods (one is my university's server and the other is an Amazon Web Services (AWS) instance).
I already have Capistrano running for my university's server, but I don't know how to add the deployment to AWS, since for that one I need to add ssh options, for example to use the .pem file, like this:
ssh_options[:keys] = [File.join(ENV["HOME"], ".ssh", "test.pem")]
ssh_options[:forward_agent] = true
I have browsed Stack Overflow and no post mentions how to deal with different authentication methods: this and this.
I found a post that talks about 2 different keys, but it refers to a server and a git repository, each using different .pem files. That is not my case.
I got to this tutorial, but couldn't find what I need.
I don't know if this is relevant to what I am asking: I am working on a Rails app with Ruby 1.9.2p290 and Rails 3.0.10, and I am using an SVN repository.
Any help is welcome. Thanks a lot.
You need to use Capistrano multistage. There is a gem that does this, or you can include an environments or stage file directly in the Capfile.
You will not be able to deploy to both environments at the same time, but you can do so sequentially.
desc "deploy to dev environment"
task :dev do
set :stage_name, "dev"
set :user, "dev"
set :deploy_to, "/usr/applications/dev"
role :app, "10.1.1.1"
end
desc "deploy to aws environment"
task :aws do
set :stage_name, "aws"
set :user, "aws"
set :deploy_to, "/usr/applications/aws"
ssh_options[:keys] = [File.join(ENV["HOME"], ".ssh", "test.pem")]
ssh_options[:forward_agent] = true
role :app, "10.2.2.2"
end
You would run:
cap dev deploy; cap aws deploy
You can expand on this to handle VPNs, users, gateways, etc.
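If you go the gem route instead, the same split can live in per-stage files with the multistage extension (a sketch; Capistrano 2.x conventions, stage names mirroring the tasks above):

# config/deploy.rb
set :stages, %w(dev aws)
set :default_stage, "dev"
require 'capistrano/ext/multistage'

# config/deploy/aws.rb -- per-stage settings
set :user, "aws"
set :deploy_to, "/usr/applications/aws"
ssh_options[:keys] = [File.join(ENV["HOME"], ".ssh", "test.pem")]
ssh_options[:forward_agent] = true
role :app, "10.2.2.2"

After that, cap aws deploy picks up the stage file automatically.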
