How to populate deploy_to variable with custom server property - ruby

I've asked this question on Capistrano's GitHub repository issue tracker (https://github.com/capistrano/capistrano/issues/1750) and was told to ask the same question here.
I'm trying to populate the deploy_to variable with a custom server property (named organisation) to deploy the same application multiple times to the same server.
set :deploy_to, "/home/deploy/sites/#{server.properties.organisation}"
It seems impossible to load the server array (or its properties) using the fetch() method.

I've done a couple of different things for this case. If each installation is indeed identical, I'll deploy once and symlink the other installations. If each installation has different parameters, I'll create multiple targets (prod-1, prod-2, prod-3, et cetera) where each points at the same server. You can use helper methods to reduce code duplication. Then I'll write a script which runs bundle exec cap prod-1 deploy && bundle exec cap prod-2 deploy && ....
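As a rough illustration, such a wrapper script could look like this (a sketch only; the stage names prod-1 to prod-3 and their config/deploy/*.rb files are assumptions):

#!/usr/bin/env bash
# Deploy the same application to every per-organisation stage in turn.
# Assumes stages prod-1, prod-2 and prod-3 are defined under config/deploy/.
set -euo pipefail

for stage in prod-1 prod-2 prod-3; do
  echo "==> Deploying ${stage}"
  bundle exec cap "${stage}" deploy
done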

Related

Script that checks GCloud project before deploy

So today I basically fumbled a huge amount of traffic because I deployed a gcloud project while having the wrong project set. As you know, when we deploy to gcloud we have to make sure we choose the right project using gcloud config set project [PROJECT_NAME]. Unfortunately this is sometimes hard to remember to do, as multiple projects require quick deployments to be sent out when bugs arise.
I was wondering if anyone had a good solution for this: a predeploy shell script which makes sure that you are deploying to the right project.
Thanks in advance,
Nikita
@Ajordat's answer is good but I think there's a simpler solution.
If you unset the default project, then you'll be required to explicitly set --project=${PROJECT} on each gcloud command.
gcloud config unset project
Then, every gcloud command will require:
gcloud ... --project=${PROJECT} ...
This doesn't inhibit specifying an incorrect ${PROJECT} value in the commands but it does encourage a more considered approach.
A related approach is to define configurations (sets of properties) and to enable these before running commands. IMO this is problematic too and I recommend unsetting gcloud config properties and always being explicit.
See:
https://cloud.google.com/sdk/gcloud/reference/config/configurations
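Coming back to the pre-deploy script Nikita asked for: if you do keep a default project set, a small guard along these lines can refuse to deploy when the active project isn't the expected one (a sketch; the script name and the EXPECTED_PROJECT value are assumptions):

#!/usr/bin/env bash
# predeploy-check.sh (hypothetical name): abort unless the active gcloud
# project matches the project this repository is meant to deploy to.
set -euo pipefail

EXPECTED_PROJECT="my-production-project"   # assumption: set per repository
ACTIVE_PROJECT="$(gcloud config get-value project 2>/dev/null || true)"

if [ "${ACTIVE_PROJECT}" != "${EXPECTED_PROJECT}" ]; then
  echo "Refusing to deploy: active project is '${ACTIVE_PROJECT}', expected '${EXPECTED_PROJECT}'." >&2
  exit 1
fi

gcloud app deploy --project="${EXPECTED_PROJECT}"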
As seen in the documentation, the gcloud tool has some flags that are independent of the subcommand (app deploy) that is being used, such as --account, --verbosity or --log-http. One of these gcloud-wide flags is --project, which will let you specify the project you are deploying to.
Now, to deploy you must use a project. In your case, since you have to deploy to different projects, you must specify the target project somewhere. There's no workaround to specifying the project, because the service must have somewhere to be deployed to. You have two options:
You can set the default project to the one you want to use:
gcloud config set project <project-name>
gcloud app deploy
Notice that you would have to set the project back again to the one you were previously using.
Or you can use the --project flag in order to specify where to deploy.
gcloud app deploy --project=<project-name>
I would recommend using the --project flag, as it's more deployment-specific and doesn't involve changing the default project. Moreover, you can keep working with the previous project ID since the default values don't change; it's just a deploy to another project.
Notice that in the past you could specify the project in the app.yaml file with the application tag, but this was deprecated in favor of the --project flag.

Node (maven) to deploy the application to several environments

On Jelastic, I created a node for building an application (Maven). There are several otherwise identical environments (NGINX + Spring Boot); each differs only in the database it binds to and its configured SSL.
The task is to ensure that after the application (*.jar) is built, it is deployed to all of these environments at the same time. How can this be implemented?
When editing a project, it is possible to specify only one environment; multi-selection is not provided.
We suggest creating a few environments that use one repository branch, and running updates via the API (https://docs.jelastic.com/api/#!/api/environment.Vcs-method-Update) after pushing the code to VCS.
It's also possible to use CloudScripting technology to attach custom logic to the onAfterBuildProject event and deploy the project to additional environments after the build is complete. Please check this JPS as an example of the code syntax. Most likely you will need to use the DeployProject API method.
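For illustration, triggering the VCS update for several environments could be scripted roughly like this (a sketch only; the platform URL, session token, environment names, and the exact endpoint path and parameter names are assumptions, so verify them against the linked API documentation):

#!/usr/bin/env bash
# Sketch: ask the Jelastic API to update (re-deploy from VCS) each environment.
# Endpoint path and parameter names are assumptions; check the docs linked above.
set -euo pipefail

PLATFORM_URL="https://app.your-jelastic-provider.com"   # assumption
SESSION="your-api-session-token"                        # assumption

for env in env-prod-1 env-prod-2 env-prod-3; do         # hypothetical names
  curl -fsS "${PLATFORM_URL}/1.0/environment/vcs/rest/update" \
    --data-urlencode "envName=${env}" \
    --data-urlencode "session=${SESSION}"
  echo "Triggered VCS update for ${env}"
done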

CI based on docker-compose?

I am currently building a little application that requires that some massively annoying software is installed and running in the background. To ease the pain of developing, I wrote a set of docker-compose files that runs the necessary daemons, creates some jobs, and throws in some test data.
Now, I'd like to run this in a CI-like manner. I currently have Jenkins check all the different repositories and execute a shell script that calls docker-compose up --abort-on-container-exit. That gets the job done, but it seems like a hack, and I'm not such a huge fan of Jenkins.
What I want to ask is: is there a more beautiful way of doing this? Specifically, is there a CI that will
watch a set of git repositories,
re-execute docker-compose (possibly multiple times with different sets of parameters), and
nicely collect and split the logs and tell me which container exactly failed how?
(Optionally) is not some cloud service but can be installed on my local server?
If the answer to this is "write a Jenkins module", then fine, so be it.
I'm aware that there are options like gitlab-ci, but I'd like to keep the CI script in a fashion that can also be easily executed during development, before pushing to a repo.
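For context, the kind of wrapper script described above might look roughly like this (a sketch; the log directory name is an assumption, and the service names come from the compose file):

#!/usr/bin/env bash
# Sketch: bring the stack up, remember whether anything failed, and dump
# one log file per service so failures are easy to attribute.
set -uo pipefail

LOG_DIR="ci-logs"
mkdir -p "${LOG_DIR}"

docker-compose up --abort-on-container-exit
status=$?

for service in $(docker-compose config --services); do
  docker-compose logs --no-color "${service}" > "${LOG_DIR}/${service}.log"
done

docker-compose down --volumes
exit "${status}"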

How can Puppet fit into a Continuous Delivery tool chain?

I'm investigating Puppet as our future deployment and provisioning tool in our shop, but now I'm stuck at how to make a clever Continuous Integration/Delivery tool chain with deployment through Puppet.
In any of our environments (dev, test, qa, demo, prod) we have a range of components. We need to be able to deploy each component separately and possibly even concurrently.
I'd like a way to initiate (through script) a deploy of a single component package (=Puppet module) and gather the output and success status of that.
Simply waiting for a scheduled agent pull, or doing a 'puppet agent --test' on each node in the environment, isn't good enough, because it may pick up other pending changes (I don't know if another component is also in the process of being deployed).
In my tool chain I would like the deployment output and status from component A and component B to be recorded separately and not mixed up.
So my question is: Can I use puppet to deploy one single named package (module) at a time?
And if not, where did I take a wrong turn when I drove down this path?
I realise a master-less Puppet set-up with modules and manifests replicated to each node perhaps could do it, but IMHO a master-less Puppet set-up kind of defeats the purpose of Puppet.
PS: I think what I'm trying to achieve is called 'Directed Orchestration' in Damon Edwards' very enlightening video, Integrating DevOps tools into a Service Delivery Platform (at around the 22:30 mark).
So my question is: Can I use puppet to deploy one single named package (module) at a time?
Yes, you can, via puppet apply. First you need to create a module directory and a module that will contain your manifests, e.g.:
/scratch/user/puppet/local/ # This is your modulepath for local deployment
# Following contains the manifests for a module name "localmod"
/scratch/user/puppet/local/localmod/manifests/init.pp
# example content of init.pp
class localmod {
  notify{"I am in local module....":}
}
On that local machine you can test this module via puppet apply :
puppet apply -v --modulepath=/scratch/user/puppet/local -e "include localmod"
echo $? # Get the exit status of the above command
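Building on that, a per-component wrapper that records output and exit status separately (which is what the question asks for) might look like this; it's a sketch, and the module path, log directory and component name are assumptions:

#!/usr/bin/env bash
# Sketch: deploy one named component (Puppet module) at a time and record
# its output and exit status separately from other components.
set -uo pipefail

MODULEPATH="/scratch/user/puppet/local"   # assumption: local module path
LOG_DIR="/var/log/deploys"                # assumption: per-component log dir
mkdir -p "${LOG_DIR}"

component="$1"   # e.g. "localmod", passed in by the tool chain

puppet apply -v --detailed-exitcodes \
  --modulepath="${MODULEPATH}" \
  -e "include ${component}" > "${LOG_DIR}/${component}.log" 2>&1
status=$?

# With --detailed-exitcodes: 0 = no changes, 2 = changes applied,
# 4 or 6 = failures occurred.
echo "${component}: exit status ${status}"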
I watched the video at the point you referenced. There are two types of automation you can do.
Application build/deploy automation, which can be achieved via maven/ant (Build) and ant/capistrano/chrome/bash/msdeploy (Deploy) or as termed on that slide "Installer".
System/Infrastructure automation can be achieved via Chef/Puppet/CFEngine.
This question seems to be ... "How do I do application builds using Puppet (implied as a system automation tool)?"
So quite simply, oval tool in round hole. (I didn't say square)
At my company, we use Jenkins and the Build Pipeline Integration plugin to build massive multi component projects. As an example, a Java app will use ant in a build job, the next chained job will be a "deploy to dev" job which uses Capistrano to deploy the application, then the next job in the chain is "Configure Dev" which calls Chef to update the system configurations in the DEV environment. Chef is used to configure the application. Each of these jobs can be set to run automatically and sequentially.
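As a rough illustration, each of those chained jobs often boils down to a single step like the following (a sketch; project layout, stage name and host are assumptions):

# Build job
ant clean package

# "Deploy to dev" job
bundle exec cap dev deploy

# "Configure Dev" job: have Chef converge the node
ssh deploy@dev-app-01 sudo chef-client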
a master-less Puppet set-up kind of defeats the purpose of Puppet.
Only if you discount
The rich DSL puppet has to offer
So many peer reviewed community modules
Otherwise, something like this gives you remote directed orchestration.
#update manifests etc (version control is the source of truth)
ssh user@host git pull
#run puppet
ssh user@host sudo puppet apply

Capistrano deploy permission problem

I'm trying to use Capistrano 2.5.19 to deploy my Sinatra application. So far, I managed to successfully run deploy:setup, but when I try to perform the actual deployment or the check (deploy:check), Capistrano tells me that I don't have permission. I'm using sudo, since I log in with my own user, and the user used for deployment is called passenger and is a member of the group www-data. Therefore I set :runner and :admin_runner to passenger. It seems, however, that Capistrano is not using sudo during the deployment, while it was definitely doing so during the setup (deploy:setup). Why is that? I thought that the user specified by the runner parameter is used for deployment.
Unfortunately, I cannot directly answer your questions. However, I would like to offer a different solution, which is to take the time to properly set up SSH/RSA keys to accomplish what you want to do. This lets you avoid setting and changing users, and it means you don't have to embed authentication information inside your cap scripts.
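A minimal key setup for that approach could look like this (a sketch; the key path, user and host are assumptions):

# Generate a dedicated key pair and install the public key for the deploy user.
ssh-keygen -t rsa -b 4096 -f ~/.ssh/id_rsa_deploy
ssh-copy-id -i ~/.ssh/id_rsa_deploy.pub passenger@your-server.example.com

# Verify password-less login works before running cap again.
ssh -i ~/.ssh/id_rsa_deploy passenger@your-server.example.com 'echo ok'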
