I've managed to set up a minimal Ansible playbook to execute some scripts on my machines:
- name: Execute CLI on remote servers
  hosts: webserver
  tasks:
    - name: Get metrics
      shell: /home/user1/bin/cli.sh --file=script.cli
The only issue is that this relies on the filesystem to store the scripts. I'd like to store my script in a repository (such as git) and pass a reference to it as an argument to the shell. Something like:
shell: /home/user1/bin/cli.sh --file=ssh://git@github.com/mylogin/script.cli
Any suggestion is highly appreciated!
Not a very elegant solution, but you can use the Ansible git module (http://docs.ansible.com/ansible/git_module.html) to first clone the repository that contains your scripts onto your target machine(s) (webserver), and then reference those files from the shell module, as in the sketch below.
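For illustration, here is a minimal sketch of that approach; the repository URL, the checkout location /home/user1/scripts, and the branch name are assumptions, not details from the question:

- name: Execute CLI on remote servers
  hosts: webserver
  tasks:
    - name: Clone the repository that holds the scripts (repo URL and dest are assumptions)
      git:
        repo: https://github.com/mylogin/scripts.git
        dest: /home/user1/scripts
        version: master

    - name: Get metrics using the cloned script
      shell: /home/user1/bin/cli.sh --file=/home/user1/scripts/script.cli

On subsequent runs the git module only pulls new commits, so the scripts on the target stay in sync with the repository.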
I just started learning about NetDevOps.
All the examples and demos of GitLab + Ansible show how to execute a "deploy.yml" playbook from the .gitlab-ci.yml.
However, when I look at a general network-equipment-based Ansible tutorial, the author executes different Ansible playbook .yml files, for example sites.yml from the root, deploy.yml from one subfolder, interfaces.yml from another subfolder.
Can someone give me an example of how I would execute the different Ansible playbook .yml files on demand?
I.e. when it detects changes to a .yml file in a folder, run the .yml file under that folder?
Let's say you have an Ansible repo with a .gitlab-ci.yml in its root:
.gitlab-ci.yml
ansible/cookbook1
ansible/cookbook2
For each cookbook you could create a GitLab job:
execute_cookbook1:
  script:
    - ...ANSIBLE COMMANDS...
  ...
  only:
    changes:
      - ansible/cookbook1/**/*.yml

execute_cookbook2:
  script:
    - ...ANSIBLE COMMANDS...
  ...
  only:
    changes:
      - ansible/cookbook2/**/*.yml
This way, when you push your code to the Ansible repo, GitLab will detect which cookbook changed and run the corresponding job.
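For illustration, here is one of those jobs fleshed out; the stage name, inventory path, and playbook filename are assumptions, not something specified in the question:

execute_cookbook1:
  stage: deploy
  script:
    # hypothetical command; point it at whatever playbook lives in this cookbook folder
    - ansible-playbook -i inventory/hosts ansible/cookbook1/deploy.yml
  only:
    changes:
      - ansible/cookbook1/**/*.yml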
In your GitLab job you can use complex rules, combining keywords like if, changes, and exists in the same rule.
The rule evaluates to true only when all included keywords evaluate to true.
For example:
docker build:
  script: docker build -t my-image:$CI_COMMIT_REF_SLUG .
  rules:
    - if: '$VAR == "string value"'
      changes: # Include the job and set to when:manual if any of the following paths match a modified file.
        - Dockerfile
        - docker/scripts/*
      when: manual
      allow_failure: true
In your case, keep when: manual and use changes: with your .yml paths, so that the job is triggered manually only when .yml Ansible playbooks have changed; see the sketch below.
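A minimal sketch of such a rule; the job name, playbook name, and the **/*.yml glob are assumptions:

run playbooks:
  script:
    - ansible-playbook site.yml   # hypothetical playbook name
  rules:
    - changes:
        - "**/*.yml"
      when: manual
      allow_failure: true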
My project builds under Windows and Linux. I have set up a gitlab-runner on Windows and one on a Linux machine. Now I want to configure the .gitlab-ci.yml for building on both machines. BUT, depending on the operating system, I'd like to call a different build script for the build.
Example .gitlab-ci.yml (not working):
mybuild:
  # on linux
  script:
    - ./build-linux.sh
  # on windows
  script:
    - buildwin.bat
How can I achieve this in the .gitlab-ci.yml?
You can't. The way to achieve it is to:

1. Give your runners unique tags, e.g. "linux-runner" and "windows-runner".
2. Duplicate the job and run one job only on runners with the "linux-runner" tag and the second job only on runners with the "windows-runner" tag.
linux build:
  stage: build
  tags:
    - linux-runner
  script:
    - ./build-linux.sh

windows build:
  stage: build
  tags:
    - windows-runner
  script:
    - buildwin.bat
See also https://stackoverflow.com/a/49199201/2779972
The generally suggested solution of creating two jobs doesn't fit my needs. I need to be able to use either a Windows or a Linux/macOS runner, whichever one is available.
My suggested trick is to create a call script in /usr/local/bin so it can mimic the Windows call command:
#!/bin/bash
./$*
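One possible way to put this helper in place on the Linux/macOS runner (the file name call and the use of sudo are assumptions about your setup):

# create the helper script and make it executable
printf '#!/bin/bash\n./$*\n' | sudo tee /usr/local/bin/call
sudo chmod +x /usr/local/bin/call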
If you want to invoke the Gradle wrapper, for example, you can simply write in the .gitlab-ci.yml:
script:
  - call gradle
It also works with a specific script (for instance "build.bat" for Windows, and "build" for macOS/Linux):
script:
  - call build
I hope that will help someone with the same need as me.
This solution works similarly to what @christophe-moine suggests, but without the need for creating a call script or alias.
Provided that your Windows CI runner runs Windows PowerShell (which is likely), you may simply create two scripts, e.g.
buildmyapp (for Linux – note the missing extension!)
buildmyapp.cmd (for Windows)
... and then execute them in GitLab CI using the Unix-style syntax, without the script extension, e.g.
mybuild:
  script:
    - ./buildmyapp
  parallel:
    matrix:
      - PLATFORM: [linux, windows]
  tags:
    - ${PLATFORM}
In the script: block, Windows PowerShell will pick up buildmyapp.cmd on the Windows runner, and the Linux shell will pick up the buildmyapp script on the Linux runner.
The parallel: matrix: keyword in combination with tags: creates two parallel jobs that pick your CI runners via the tags keyword.
I'm working on a playbook to upload a configuration file to remote servers, but the remote servers do not have python installed (which is a requirement for using modules). I have successfully written other playbooks using the raw feature to avoid having to install python on the servers, but I can't find any examples in the Ansible documentation to perform a file upload using bare-bones ssh. Is a non-module based upload possible?
Not sure why you use Ansible this way, but you can run a local task with scp:
- name: remote task
  raw: echo remote

- name: local scp
  local_action: command scp /path/to/localfile {{ inventory_hostname }}:/path/to/remotefile

- name: remote task
  raw: cat /path/to/remotefile
I usually check for and install Python with the raw module and then continue with Ansible core modules, along the lines of the sketch below.
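A rough sketch of that bootstrap pattern, assuming a Debian/Ubuntu target, a root-capable connection, and the python3 package name (all assumptions; adjust for your distribution):

- name: bootstrap Python so regular modules can run (Debian/Ubuntu assumed)
  raw: test -e /usr/bin/python3 || (apt-get update && apt-get install -y python3)

- name: with Python in place, ordinary modules work, e.g. uploading a config file
  copy:
    src: files/app.conf     # hypothetical local file
    dest: /etc/app.conf     # hypothetical remote path
    mode: "0644"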
This answer may not always be applicable, but as long as you are allowed to put the files on some kind of web server, and as long as curl or wget or similar tools are installed on the remote system, you can use them to download your files from within a raw task, for example:
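A sketch of that idea; the URL and destination path are hypothetical:

- name: fetch the configuration file with curl from inside a raw task
  raw: curl -fsSL -o /etc/app.conf https://files.example.com/app.conf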
I'm using an Ansible playbook to copy files between my host and a server. The thing is, I have to run the script repeatedly in order to upload some updates. At the beginning I was using the "copy" module of Ansible, but to improve performance of the synchronizing of files and directories, I've now switched to use the "synchronize" module. That way I can ensure Ansible uses rsync instead of sftp or scp.
With the "copy" module, I was able to specify the file's mode in the destination host by adding the mode option (e.g. mode=644). I want to do that using synchronize, but it only has the perms option that accepts yes or no as values.
Is there a way to specify the file's mode using "synchronize", without having to inherit it?
Thx!
Finally I solved it using rsync_opts:
- name: sync file
  synchronize:
    src: file.py
    dest: /home/myuser/file.py
    rsync_opts:
      - "--chmod=F644"
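For what it's worth, the F prefix in --chmod=F644 makes rsync apply the mode to files only, while a D prefix targets directories. A variation that sets both (the directory mode 755 and the paths are assumptions):

- name: sync a directory with separate directory and file modes
  synchronize:
    src: app/
    dest: /home/myuser/app/
    rsync_opts:
      - "--chmod=D755,F644"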
I'm working with Ansible using ansible-pull (runs on cron).
Can I install an Ansible role from Ansible Galaxy without logging in to all the computers (just by adding a command to my Ansible playbook)?
If I understand you correctly, you're trying to download and install roles from Ansible Galaxy from the command line, in a hands-off manner, possibly repeatedly (via cron). If this is the case, here's how you can do it.
# download the roles
ansible-galaxy install --ignore-errors f500.elasticsearch groover.packerio

# run ansible-playbook to install the roles downloaded from Ansible Galaxy
ansible-playbook -i localhost, -c local <(echo -e '- hosts: localhost\n  roles:\n    - { role: f500.elasticsearch, elasticsearch_cluster_name: "my elasticsearch cluster" }\n    - { role: groover.packerio, packerio_version: 0.6.1 }\n')
Explanation / FYI:
To download roles from Ansible Galaxy, use ansible-galaxy, not ansible-pull. For details, see the manual. You can download multiple roles at once.
If the role had been downloaded previously, repeated attempts at downloading using ansible-galaxy install will result in an error. If you wish to call this command repeatedly (e.g. from cron), use --ignore-errors (skip the role and move on to the next item) or --force (force overwrite) to work around this.
When running ansible-playbook, we can avoid having to create an inventory file using -i localhost, (the comma at the end signals that we're providing a list, not a file).
-c local (same as --connection=local) means that we won't be connecting remotely but will execute commands on the localhost.
<() functionality is process substitution. The output of the command appears as a file, so we can feed a "virtual playbook file" into the ansible-playbook command without saving the playbook to the disk first (e.g., playbookname.yml).
As shown, it's possible to embed role variables, such as packerio_version: 0.6.1, and to apply multiple roles in a single command.
Note that whitespace is significant in playbooks (they are YAML files). Just as in Python code, be careful about indentation. It's easy to make typos in long lines with echo -e and \n (newlines).
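For readability, the embedded one-liner expands to the equivalent of this playbook file (nothing new here, just the echo string reformatted):

- hosts: localhost
  roles:
    - { role: f500.elasticsearch, elasticsearch_cluster_name: "my elasticsearch cluster" }
    - { role: groover.packerio, packerio_version: 0.6.1 }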
You can run updates of roles from Ansible Galaxy and ansible-playbook separately.
With a bit of magic, you don't have to create inventory files or playbooks (this can be useful sometimes). Installing Galaxy roles remotely via push is cleaner and less hacky, but if you prefer cron and pulling, this approach can help.
I usually add roles from Galaxy as submodules in my own repository; that way I have control over when I update them, and ansible-pull will automatically fetch them, removing the need to run ansible-galaxy.
E.g.:
mkdir roles
git submodule add https://github.com/groover/ansible-role-packerio roles/groover.packerio
Yes, you can:
# install Ansible Galaxy requirements via the pull playbook itself
- hosts: localhost
  tasks:
    - command: ansible-galaxy install -r requirements.yml
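For completeness, the requirements.yml referenced above could look something like this; the role names are just reused from the earlier example and the version pin is an assumption:

# requirements.yml
- src: f500.elasticsearch
- src: groover.packerio
  version: 0.6.1   # hypothetical version pin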