How to run a custom command on the Rundeck host itself in Rundeck? - ansible

I'm creating a Rundeck job and I want to run a custom command (e.g. git pull) on the Rundeck host itself within a job that works on other nodes. I can see the Command and Script node steps, but is there a matching workflow step?
Context
I'm pretty new to Rundeck, so here's some context on what I'm trying to achieve; maybe my whole design is wrong. I'm pretty familiar with Ansible, though, and I'm trying to integrate it with Rundeck so that Rundeck acts as an executor of Ansible scripts.
We're developing a software product which is an on-prem solution and is quite complex to install (it requires deep OS configuration). We want to develop it in a Continuous Delivery fashion, as our cloud products are. So in the git repository, alongside the product, we keep the Ansible workspace (playbooks, roles, requirements, custom tasks - everything except the inventory), and on every commit the Ansible workspace should be compatible with that particular product version.
My current approach is the following: the build pipeline publishes both the product build and the zipped Ansible workspace as artifacts. Whenever we want to deploy, we run a Rundeck job, which:
downloads the Ansible workspace from the artifacts (alternative idea: pulls the repository at the proper commit)
runs the Ansible playbook (via the Ansible workflow step), which does the work on the selected nodes
How can I perform this first step? From what I can see, I can run a script or command on nodes, but in a particular job run the nodes are the target machines, not the Rundeck host. There is also the SCM git plugin for Rundeck, but it loads jobs from a repository, not an Ansible workspace.

A good approach for that:
Integrate Rundeck with Ansible following this guide (note that Rundeck and Ansible must coexist on the same server).
Create a new job; by default, new Rundeck jobs are configured to run locally.
As the first step, add a script step (inline script) that changes to the desired directory and runs the git pull command (you can also use this approach to clone the repo if you need to).
The next step can then be the execution of your Ansible playbook (Ansible Playbook Workflow Node Step) in your job.
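Such an inline script step could look like the sketch below. It is only an illustration: the function name, repository URL, and checkout path are assumptions, not taken from the question. It clones the Ansible workspace on the first run and fast-forward pulls on subsequent runs:

```shell
#!/bin/sh
# Hypothetical inline-script step for the Rundeck job (it runs on the
# Rundeck host itself): fetch or update the Ansible workspace checkout.
fetch_workspace() {
    repo_url=$1   # URL of the Ansible workspace repo (assumption)
    workdir=$2    # checkout directory on the Rundeck host (assumption)
    if [ -d "$workdir/.git" ]; then
        # Workspace already checked out: just update it.
        git -C "$workdir" pull --ff-only
    else
        # First run: clone the workspace.
        git clone "$repo_url" "$workdir"
    fi
}
```

Calling, say, `fetch_workspace git@git.example.com:product/ansible-workspace.git /var/lib/rundeck/ansible-workspace` in the script step leaves a checkout that the following Ansible Playbook workflow step can point at.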

Related

Is there a way to trigger a webhook on a scheduled basis in GitLab CI?

I have an Ansible project in GitLab that runs a playbook on a schedule via a GitLab CI runner.
I want to move playbook execution from the runner to AWX (Ansible Tower). AWX supports triggering a job template via a webhook from GitLab, but in GitLab I can't find a way to trigger a webhook on a schedule.
Is there a way to trigger a webhook on a schedule in GitLab CI?
The fine manual shows that every pipeline can have schedules, as a cursory search would also have revealed.
Why don't you use the scheduling feature included in AWX? It doesn't make sense to run a GitLab CI job that does nothing but trigger a job in AWX.
If your use case is more complex and you must use the schedule in GitLab, you can use something like this:
stages:
  - deploy

deploy:
  stage: deploy
  image: your_docker_image_containing_awx_cli
  script:
    - awx --conf.host $AWX_HOST --conf.token $AWX_TOKEN -f human job_templates launch 'Your_deployment_job_name' --monitor --filter status
You can either prepare your own container image with the AWX CLI or use a Python image and install it each time as part of the script in the job definition.
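A minimal sketch of the "install it each time" variant, assuming the `awx` command comes from the `awxkit` Python package and that the `AWX_HOST`/`AWX_TOKEN` CI variables are already defined in the project:

```yaml
stages:
  - deploy

deploy:
  stage: deploy
  image: python:3.11-slim        # generic Python image, no AWX CLI baked in
  script:
    # awxkit is the package that ships the `awx` command-line client
    - pip install --quiet awxkit
    - awx --conf.host $AWX_HOST --conf.token $AWX_TOKEN -f human job_templates launch 'Your_deployment_job_name' --monitor --filter status
```

Baking the CLI into your own image trades a slower first build for faster pipeline runs.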

Does an Azure Pipelines 'command line' agent job inherit the working directory from the previous job?

My understanding of Azure Pipelines agent jobs was that:
each job is independent,
and that each 'command line' job runs in its own context with an independent scope.
But if the working directory of an Azure Pipelines 'command line' task is not set, it seems to default to the working directory of the previous 'command line' agent job.
Before I answer the question, I want to make sure the terminology is clear:
Pipelines are the overall definition of your CI/CD process; they can contain multiple stages.
Stages are phases of your pipeline, like build, test, deploy... They can contain multiple jobs.
Jobs are collections of tasks/steps needed to implement your process. They contain one or more tasks/steps.
Tasks (or steps) are the actual actions being executed, like "execute this command" or "build that .NET project".
The environment is reset between jobs (meaning a new virtual machine will be used, sources pulled again, etc.). Between tasks or steps that belong to the same job, you keep the same environment, and each task "benefits" from the outcome (files changed, environment variables, ...) of the previous ones.
In terms of working directory, they all default to build.workingDirectory (see the Azure DevOps default variables).
If you set the working directory of one task to something different, it will not impact other tasks.
If you use a Microsoft-hosted agent, each time you run a pipeline you get a fresh virtual machine, and the virtual machine is discarded after one use. Each job may use a different agent, so you should not assume that state from an earlier job is available during subsequent ones.
The following is a simple test of this.
I created two agent jobs in my pipeline, added a command-line task to each, and ran the agent jobs one by one. In the first command task, I create a .txt file in the $(Agent.BuildDirectory) folder and then read it.
In the second command task, I just change to that folder and try to read the .txt file.
The second task fails and shows an error message.
If I set the working directory in the first task and do not set it in the second, the two tasks' working directories are different.
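A minimal pipeline reproducing that test might look like this (the job names and file name are made up for illustration). On Microsoft-hosted agents the second job fails, because it runs in a fresh agent workspace:

```yaml
jobs:
  - job: WriteFile
    steps:
      - script: |
          echo hello > "$(Agent.BuildDirectory)/demo.txt"
          cat "$(Agent.BuildDirectory)/demo.txt"   # works: same job, same agent
  - job: ReadFile
    dependsOn: WriteFile
    steps:
      # Fails on a Microsoft-hosted agent: new VM, demo.txt does not exist
      - script: cat "$(Agent.BuildDirectory)/demo.txt"
```

To pass files between jobs you would publish and download a pipeline artifact instead of relying on the filesystem.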

Do I need a "shell" runner if I want to deploy GitLab + Ansible on the same box?

I've seen a GitLab demo about using infrastructure as code. If I wanted to do the same on-prem, would this work? Set up open-source GitLab on-prem, set up bash as a shell runner, and use the shell runner to execute Ansible playbooks on the network equipment?
Having written out this answer, I now believe this question is at risk of closure for being either Too Broad or Primarily Opinion Based; but I already spent the effort to type it out, so here we go.
Set up open-source GitLab on-prem, set up bash as a shell runner, and use the shell runner to execute Ansible playbooks on the network equipment?
I believe that's possible. Alternatively, you can install Ansible, along with any required Python modules for your playbooks, into a Docker image and then use the Docker executor to run the playbooks inside a container. Using Tower or AWX is also possible, since they have the concept of projects run from source control.
The advantage of using the Docker runner is that you don't have to pre-install Ansible (along with its dependencies) on every runner host; the disadvantage is that I could imagine SSH authentication from inside the container getting weird.
# hypothetical .gitlab-ci.yml
stages:
  - apply

run ansible playbook:
  stage: apply
  image: docker.example.com/my-ansible:2.8
  script:
    - ansible-playbook -i ./some-inventory -v playbook.yml
The advantage of using a dedicated system like AWX (or Tower) is that the inventory against which those playbooks run is also a formally managed entity in the system, and it wouldn't require teaching GitLab how to make that inventory available to your playbook. Same story with authentication, since AWX has first-class support for a managed SSH keypair that can be conditionally granted to only certain playbook projects.
You can still have GitLab integrate with Tower by using either tower-cli or its rich API to launch a job template that has its project configured to do an SCM update before launch.
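As a sketch of the API route (the stage name, image, and the `AWX_HOST`/`AWX_TOKEN`/`AWX_TEMPLATE_ID` variables are assumptions for illustration), a GitLab job could launch the Tower/AWX job template directly:

```yaml
launch awx job template:
  stage: apply
  image: curlimages/curl:latest
  script:
    # POST to the launch endpoint of the job template; AWX then performs
    # the SCM update of the project before running the playbook.
    - >
      curl --fail -X POST
      -H "Authorization: Bearer $AWX_TOKEN"
      "$AWX_HOST/api/v2/job_templates/$AWX_TEMPLATE_ID/launch/"
```

tower-cli (or the newer awx CLI) wraps the same API and can additionally wait for and report the job result.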

Is it possible to run a playbook in "pull mode"?

I have playbooks which I start on a master host and which run specific actions on remote hosts. This is a "push" mode - the activity is initiated by the master host.
Several of my hosts are down at a given time and obviously fail to run the playbook when in such a state. This leads to hosts which are up-to-date, while others are not.
To fix this I could run the playbook on the master host in a regular way (via cron for instance) but this is not particularly effective.
Is there a built-in way in Ansible to reverse the flow, that is to initiate from a remote host a playbook available on the master, to run it on that remote host?
I can imagine that the remote host could ssh to the master (at boot time for instance) and then trigger the playbook with the host as a parameter (or something around that idea) but I would definitely prefer to use an Ansible feature instead of reinventing it.
There is a script called ansible-pull which reverses the default push architecture of Ansible. There is also an example playbook from the Ansible developers available.
The use of ansible-pull mode is really simple and straightforward; this example might help you:
ansible-pull -d /root/playbooks -i 'localhost,' -U git@bitbucket.org:arbabnazar/pull-test.git --accept-host-key
Option details:
1. --accept-host-key: adds the host key for the repo URL if it is not already added
2. -U: URL of the playbook repository
3. -d: directory to check the repository out to
4. -i 'localhost,': this option indicates the inventory to be considered.
Since we're only concerned about one host, we use -i 'localhost,'.
For details, please refer to this tutorial.

Running playbooks automatically

I have been learning Ansible recently and I am having a hard time figuring out how to configure Ansible to run playbooks on its own after a certain interval, just like Puppet does.
Ansible works in a different way compared to Puppet.
Puppet PULLS configuration changes from a central place and applies changes on the remote host that asked for them.
Ansible by design works differently. You PUSH changes (from any control machine that has SSH access to the remote hosts - usually your own computer) to the remote hosts.
You can make Ansible work in pull mode too, but it's not how Ansible was designed to be used.
You can see this answer for more information: Can't run Ansible in daemon-mode
If you would like a host to automatically run playbooks on itself (localhost), you would basically use the ansible-pull script + crontab.
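For example, a crontab entry along these lines (the repository URL, paths, and playbook name are assumptions) would make each host re-apply its own configuration every 30 minutes:

```
# /etc/cron.d/ansible-pull (hypothetical): every host pulls the playbook
# repo and applies local.yml against itself every 30 minutes.
*/30 * * * * root ansible-pull -U https://git.example.com/ops/playbooks.git -d /var/lib/ansible/local -i 'localhost,' local.yml >> /var/log/ansible-pull.log 2>&1
```

This effectively turns Ansible into a Puppet-style pull setup, at the cost of distributing the cron job and repository credentials to every host.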
If you want to run the playbooks once after a certain interval, you can use the at command.
Example
# Schedule a command to execute in 20 minutes as root.
- at: command="ls -d / > /dev/null" count=20 units="minutes"
Further information is available on the official Ansible site.
This is also what Ansible Tower is for. It can run playbooks after being pinged on its API, on a schedule, manually, and so on.