Capistrano: creating tasks with roles does not work - ruby

I am trying to create tasks with different roles:
namespace :foo do
  task :mytasks, :roles => [:a, :b] do
    task_a
    task_b
  end
  task :task_a, :roles => :a do
    run 'echo A'
  end
  task :task_b, :roles => :b do
    run 'echo B'
  end
end
When I execute 'mytasks', here is the result:
$ cap -n ROLES=b foo:mytasks
* 2013-03-01 16:59:14 executing `foo:mytasks'
* executing "echo A"
* executing "echo B"
All tasks get executed. Why?

Capistrano roles are intended to associate a given server (or multiple servers) with a particular function; for example, "machine-a" is a web server while "machine-b" is a database server. This is useful because certain tasks only need to be performed on certain machines.
So roles are not intended to be a way to conditionally select which machine(s) to run tasks on at the time when you are running Capistrano, they simply select which tasks should be run on which machines.
There is, however, another Capistrano feature called Multistage that may be what you're looking for. It allows you to specify different sets of servers (and even associate them with different roles) based on the "stage" you're deploying to. So you could have a and b stages, each with separate sets of servers, which you could deploy using:
cap a foo:mytasks
cap b foo:mytasks
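A minimal sketch of what that could look like with the capistrano-ext multistage extension (Capistrano 2.x; the stage names `a` and `b` and the server names are placeholders, not taken from the question):

```ruby
# deploy.rb -- enable multistage (requires the capistrano-ext gem)
require 'capistrano/ext/multistage'

set :stages, %w(a b)
set :default_stage, "a"

# config/deploy/a.rb -- servers used when deploying stage "a"
role :a, "machine-a.example.net"

# config/deploy/b.rb -- servers used when deploying stage "b"
role :b, "machine-b.example.net"
```

With this in place, `cap a foo:mytasks` loads only the servers declared in config/deploy/a.rb before running the task.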

Related

Dependency and condition order in Azure DevOps Pipelines

In an Azure Pipelines YAML file, when defining multiple jobs in a single stage, you can specify dependencies between them. You can also specify the conditions under which each job runs.
Code #1
jobs:
- job: A
  steps:
  - script: echo hello
- job: B
  dependsOn: A
  condition: and(succeeded(), eq(variables['build.sourceBranch'], 'refs/heads/master'))
  steps:
  - script: echo this only runs for master
Code #2
jobs:
- job: A
  steps:
  - script: "echo ##vso[task.setvariable variable=skipsubsequent;isOutput=true]false"
    name: printvar
- job: B
  condition: and(succeeded(), ne(dependencies.A.outputs['printvar.skipsubsequent'], 'true'))
  dependsOn: A
  steps:
  - script: echo hello from B
Question:
Code #1 and #2 above declare the dependency and the condition in different orders. Does the order matter? If so, what is the difference between the different orders?
Let's discuss #1 and #2 separately.
Code #1:
There is no data connection between job A and job B (no variable sharing or the like), so in #1 the order does not matter. You can even drop dependsOn entirely if you have no particular requirement on the execution order of job A and job B.
But pay attention to one key thing: without dependsOn, the actual running order is not guaranteed. Most of the time the jobs will run as job A, then job B; occasionally they will run as job B, then job A.
Code #2:
Here dependsOn must be specified, because job B consumes an output variable created in job A. Since the same variable name can exist in several jobs, you must specify dependsOn so that the system knows job B should read the variable skipsubsequent from job A and not from some other job. Only when this keyword is specified are the variables generated in job A exposed and made available to subsequent jobs.
In a nutshell: as soon as there is any data connection between jobs, e.g. passing a variable, you must specify dependsOn so that the jobs are connected to each other.
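The underlying reason the order is irrelevant is that dependsOn and condition are sibling keys in a YAML mapping, and mapping keys are unordered. A quick sketch (using Ruby's YAML parser purely for illustration; the job stanza is adapted from Code #1):

```ruby
require 'yaml'

# Two renderings of the same job stanza, differing only in the
# order of the dependsOn and condition keys.
first_order = YAML.safe_load(<<~YML)
  job: B
  dependsOn: A
  condition: and(succeeded(), eq(variables['build.sourceBranch'], 'refs/heads/master'))
YML

second_order = YAML.safe_load(<<~YML)
  job: B
  condition: and(succeeded(), eq(variables['build.sourceBranch'], 'refs/heads/master'))
  dependsOn: A
YML

# Mapping keys carry no order, so both parse to the same hash.
first_order == second_order  # => true
```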

Run Ansible playbook on UNIQUE user/host combination

I've been trying to implement Ansible in our team to manage different kinds of application things, such as configuration files for products and applications, the distribution of maintenance scripts, and so on.
We don't like to work with "hostnames" in our team because we have 300+ of them with meaningless names. Therefore, I started out creating aliases for them in the Ansible hosts file like:
[bpm-i]
bpm-app1-i1 ansible_user=bpmadmin ansible_host=el1001.bc
bpm-app1-i2 ansible_user=bpmadmin ansible_host=el1003.bc
[bpm-u]
bpm-app1-u1 ansible_user=bpmadmin ansible_host=el2001.bc
bpm-app1-u2 ansible_user=bpmadmin ansible_host=el2003.bc
[bpm-all:children]
bpm-i
bpm-u
Meaning we have a BPM application named "app1" and it's deployed on two hosts in integration-testing and on two hosts in user-acceptance-testing. So far so good. Now I can run an Ansible playbook to (for example) set up the SSH access (authorized_keys) for team members or push a maintenance script. I can run those PBs on each host separately, on all hosts in ITT or UAT, or even everywhere.
But, typically, we'll have to install the same application app1 again on an existing host but with a different purpose - say, a "training" environment. My reflex would be to do this:
[bpm-i]
bpm-app1-i1 ansible_user=bpmadmin ansible_host=el1001.bc
bpm-app1-i2 ansible_user=bpmadmin ansible_host=el1003.bc
[bpm-u]
bpm-app1-u1 ansible_user=bpmadmin ansible_host=el2001.bc
bpm-app1-u2 ansible_user=bpmadmin ansible_host=el2003.bc
[bpm-t]
bpm-app1-t1 ansible_user=bpmadmin ansible_host=el2001.bc
bpm-app1-t2 ansible_user=bpmadmin ansible_host=el2003.bc
[bpm-all:children]
bpm-i
bpm-u
bpm-t
But ... running PBs becomes a mess now and causes errors. Logically I have two alias names reaching the same user/host combination: bpm-app1-u1 and bpm-app1-t1. I don't mind, that's perfectly logical, but if I were to test a new maintenance script, I would first push it to bpm-app1-i1 for testing and, when OK, I would probably run the PB against bpm-all. But because of the non-unique user/host combinations for some aliases, the PB would run multiple times on the same user/host. Depending on the actions in the PB this may happen to work, but it may also fail horribly.
Is there no way to tell Ansible "Run on ALL - UNIQUE user/host combinations" ?
Since most tasks change something on the remote host, you could use Conditionals to check for that change on the host before running.
For example, if your playbook has a task to run a script that creates a file on the remote host, you can add a when clause to "skip the task if file exists" and check for the existence of that file with a stat task before that one.
- name: Check whether the script has run previously by looking for its marker file
  stat:
    path: /path/to/something
  register: something

- name: Run script when the file above does not exist
  command: bash myscript.sh
  when: not something.stat.exists
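As an aside (worth checking against your Ansible version), the command module's creates argument expresses the same "skip if this file exists" idempotence in a single task, so the separate stat step is not needed:

```yaml
- name: Run script only if the marker file does not exist yet
  command: bash myscript.sh
  args:
    creates: /path/to/something
```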

How to print capistrano current thread hash?

An example output from capistrano:
INFO [94db8027] Running /usr/bin/env uptime on leehambley@example.com:22
DEBUG [94db8027] Command: /usr/bin/env uptime
DEBUG [94db8027] 17:11:17 up 50 days, 22:31, 1 user, load average: 0.02, 0.02, 0.05
INFO [94db8027] Finished in 0.435 seconds command successful.
As you can see, each line starts with "{type} {hash}". I assume the hash is some unique identifier for either the server or the running thread, as I've noticed that if I run Capistrano over several servers, each one has its own distinct hash.
My question is, how do I get this value? I want to manually output some message during execution, and I want to be able to match my output, with the server that triggered it.
Something like: puts "DEBUG ["+????+"] Something happened!"
What do I put in the ???? there? Or is there another, built in way to output messages like this?
For reference, I am using Capistrano Version: 3.2.1 (Rake Version: 10.3.2)
This hash is a command UUID. It is tied not to the server but to the specific command currently being run.
If all you want is to distinguish between servers you may try the following
task :some_task do
  on roles(:app) do |host|
    debug "[#{host.hostname}:#{host.port}] something happened"
  end
end

Rake equivalent of make -j (--jobs)

The make command allows a -j (--jobs) option, documented as follows:
-j [jobs], --jobs[=jobs]
Specifies the number of jobs (commands) to run simultaneously. If there is more than one -j option,
the last one is effective. If the -j option is given without an argument, make will not limit the
number of jobs that can run simultaneously.
In a day and age where even cell phones have multiple cores and/or processors, I want my build systems to handle multithreaded processing.
What is the best way to set up rake so I can ensure up to 3 tasks are running at all times?
Yes, rake allows jobs to run in parallel. To set the level of parallelism, use the -j switch. From rake --help:
-j, --jobs [NUMBER] Specifies the maximum number of tasks to execute in parallel. (default is number of CPU cores + 4)
But the task itself must be written as a multitask, not a task. So instead of defining the task like:
namespace :mynamespace do
  desc "description"
  task task_name: :environment do
    your_code
  end
end
use multitask:
namespace :mynamespace do
  desc "description"
  multitask task_name: :environment do
    your_code
  end
end
Rake also supports the -m (--multitask) switch, which treats every task as a multitask without changing the Rakefile.
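A multitask runs its prerequisites in concurrent threads. A small self-contained sketch (the task names are made up for illustration; `Rake::MultiTask.define_task` is what the `multitask` DSL method calls under the hood):

```ruby
require 'rake'

# Thread-safe queue recording which task bodies ran; the
# prerequisites of a multitask execute in concurrent threads.
ran = Queue.new

Rake::Task.define_task(:task_a) { ran << :a }
Rake::Task.define_task(:task_b) { ran << :b }

# A multitask whose prerequisites :task_a and :task_b run in
# parallel before its own body executes.
Rake::MultiTask.define_task(:both => [:task_a, :task_b]) { ran << :done }

Rake::Task[:both].invoke

# Drain the queue: :a and :b appear in nondeterministic order,
# but :done is always last.
results = []
results << ran.pop until ran.empty?
```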

Issue with :hosts

I have a Capfile for multistage deploys that needs to deploy the code to one server (NFS) and finally restart several application servers. Roles can therefore not easily be used, since the application servers should not be used for deploy:update_code. I have come up with something that might work, but I have an issue that needs to be resolved.
application_servers = nil

task :production do
  role :nfs, "nfs.someserver.net"
  application_servers = "app.someserver.net"
end

task :staging do
  role :nfs, "nfs-staging.someserver.net"
  application_servers = "app-staging.someserver.net"
end

desc "tail resin logs #{resin_logs}"
task :tail, :hosts => application_servers do
  puts("Server is:#{application_servers}")
  stream "tail -f #{resin_logs}"
end
And when running:
$ cap staging tail
* executing `staging'
* executing `tail'
Server is:app-staging.someserver.net
* executing "tail -f /log/resin/*.log"
servers: ["nfs-staging.someserver.net"]
[nfs-staging.someserver.net] executing command
tail: cannot open `/log/resin/*.log' for reading: No such file or directory
tail: no files remaining
command finished
failed: "sh -c 'tail -f /log/resin/*.log'" on nfs-staging.someserver.net
When printing the value of application_servers in the tail task, it says "app-staging.someserver.net", but the value used in :hosts => application_servers is empty (which is why it uses the nfs role instead).
Why does the variable application_servers have two different values? Is it a scope issue? I have tried with a global ($) and that does not work either.
Solved the issue just by changing from :hosts to :roles on the application-specific task and adding a new role. (The reason the variable appeared to have two different values: the options hash passed to task is evaluated when the Capfile is loaded, before the staging task has run and assigned the variable, whereas the task body is evaluated at execution time.) The key feature is to use no_release so that the code is not deployed to the application servers; we only want to restart the Resin instance on those machines.
task :production do
  role :nfs, "nfs.someserver.net"
  role :application, "app.someserver.net", :no_release => true
end

task :staging do
  role :nfs, "nfs-staging.someserver.net"
  role :application, "app-staging.someserver.net", :no_release => true
end

desc "tail resin logs #{resin_logs}"
task :tail, :roles => :application do
  stream "tail -f #{resin_logs}"
end
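The load-time versus run-time distinction behind the original problem can be seen in plain Ruby (a contrived sketch, not Capistrano itself):

```ruby
# The :hosts option is evaluated when the task is defined (load
# time), so it captures the value the variable has at that moment.
application_servers = nil
options = { :hosts => application_servers }  # captures nil

# Later, the :staging task assigns the variable (run time)...
application_servers = "app-staging.someserver.net"

# ...but the already-built options hash still holds the old value.
options[:hosts]  # => nil
```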
