I am developing an Ansible module for compiling sources, like:
- source_compile:
    archive: /var/cache/sources/nginx.tar.gz
    configure:
      prefix: /usr
The module will probably:
Check the source package (if it is a URL, download it).
Make a unique build directory.
Unarchive the source into the build directory.
Run configure && make && make install.
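Those steps can be sketched roughly in Python (what the module implementation might do internally; the paths and archive name are illustrative, and with dry_run set the configure/make commands are only collected, not executed):

```python
import subprocess
import tarfile
import tempfile

def build_from_archive(archive, configure_args=("--prefix=/usr",), dry_run=False):
    """Unpack a source archive into a unique build dir and run the build steps."""
    builddir = tempfile.mkdtemp(prefix="source_compile.")  # unique build directory
    with tarfile.open(archive) as tar:
        tar.extractall(builddir)                           # unarchive the sources
    commands = [["./configure", *configure_args], ["make"], ["make", "install"]]
    if not dry_run:
        for cmd in commands:
            subprocess.run(cmd, cwd=builddir, check=True)  # fail fast on any error
    return builddir, commands
```

A real module would additionally handle the download-if-URL case and report changed/failed status back to Ansible.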
So I want to use the Ansible core modules get_url, unarchive, and shell from my module, but I can't find out how.
You can execute other modules only from an action plugin, not from the module itself.
This is done with the _execute_module helper function; see the template action plugin for an example.
You may also be interested in other helper functions such as fetch_url to retrieve remote data and _low_level_execute_command to run shell commands instead of calling other modules.
I'd recommend inspecting the source code of Ansible's core modules and actions to get an idea of how things work.
Currently I am working on an API which uses Serverless Framework with Go.
I'm using the Serverless-offline plugin for local testing.
This API depends on a few other repositories (which I also maintain), which I import using the go.mod file.
However I am having a hard time refining my developer workflow.
Currently, if I want to make changes in a repository which this API depends upon, I have to alter the project's go.mod to include replace directives for testing, and then manually change it back before deploying to production.
Basically I'm looking for a way to include replace directives, which only get applied during local development. How has everyone else dealt with this problem?
Bonus question: Is there any way to run serverless-offline in Docker? I'm finding that running serverless-offline on bare metal causes inconsistencies between different developers' environments.
You can run go commands with an alternate go.mod file with the -modfile option:
From Build commands:
The -modfile=file.mod flag instructs the go command to read (and
possibly write) an alternate file instead of go.mod in the module root
directory. The file’s name must end with .mod. A file named go.mod
must still be present in order to determine the module root directory,
but it is not accessed. When -modfile is specified, an alternate
go.sum file is also used: its path is derived from the -modfile flag
by trimming the .mod extension and appending .sum.
Create a local.go.mod file with the necessary replace directives for development, and build with it, for example:
go build -modfile=local.go.mod ./...
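For example, a local.go.mod could duplicate go.mod and add the development replace directives (module paths and versions here are illustrative):

```
module example.com/myapi

go 1.16

require example.com/mylib v1.2.3

replace example.com/mylib => ../mylib
```

To avoid passing the flag to every command, GOFLAGS=-modfile=local.go.mod can be set in the development environment, and local.go.mod plus the derived local.go.sum can be git-ignored so they never affect production builds.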
My current playbook is structured this way
projectroot
|
|--ubuntu2004
|
|--00_setup
|
|--vars
|--playbook.yml
|--readme.md
My playbook uses ansible.posix, and I commit the playbook to a GitHub repo. I was hoping there is a way to declare the required collection (in this case ansible.posix) as a requirement. How do I install it?
I saw that there are multiple ways: https://docs.ansible.com/ansible/latest/user_guide/collections_using.html#installing-collections
I was wondering which approach is considered best practice when using a GitHub repository as version control for the playbook?
There are a few ways you could do this. I'd suggest a requirements file, since it's the easiest to set up and manage.
Create a requirements file that you use to install the required collections.
The best way is to create a requirements file that references the collection(s) your playbook needs, which you can then use to install the required collection(s) and/or role(s):
---
collections:
  - name: ansible.posix
Install everything it lists with:
ansible-galaxy collection install -r requirements.yml
Store the collection in a repository and install it through ansible-galaxy.
You could upload the collection to a git repository and then install it from there. I wouldn't recommend this, since storing dependencies in your source code isn't considered good practice when there's a tool available to manage them.
ansible-galaxy collection install git+https://github.com/organization/repo_name.git,devel
Install the collection through a playbook.
I've previously set up a control node to run playbooks on and installed collections through the command task in a playbook. As long as you don't reference or include any tasks or plays that use the collection, this works fine.
- name: Install the ansible.posix collection
  command: ansible-galaxy collection install ansible.posix
Ideally, the playbook would install the required collection(s) before executing the tasks. However, it's not possible to ignore the error that is thrown when a collection is missing, so you'd have to create a play that doesn't include or reference any of the tasks using the collection.
We have a set of Ansible modules on GitHub (https://github.com/zhmcclient/zhmc-ansible-modules) and generate HTML documentation from them using Sphinx. However, the build process includes a step where a documentation-generator tool from Ansible is run to produce .rst files from the Python module source.
We have set up an RTD project for this (http://zhmc-ansible-modules.readthedocs.io/), but that extra step is not run there, of course.
-> How can we get that extra step to run within the build process on RTD?
RTD does not support intermediary steps in its build process. You must provide source files in your repository that are ready to be rendered. See RTD Build Process.
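One workaround worth noting: Read the Docs does execute Sphinx's conf.py at build time, so projects in this situation often trigger the generation step from there. A minimal sketch, with the generator script name purely illustrative:

```python
# docs/conf.py (excerpt) -- run the .rst generator before Sphinx builds.
import os
import subprocess

# RTD sets the READTHEDOCS environment variable in its build environment.
on_rtd = os.environ.get("READTHEDOCS") == "True"
if on_rtd:
    # Illustrative: call whatever tool produces the .rst files from the modules.
    subprocess.check_call(["python", "generate_module_rst.py"],
                          cwd=os.path.dirname(os.path.abspath(__file__)))
```

Locally, the same script can be run by the Makefile before invoking Sphinx.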
I have a playbook and a bunch of modules I wrote.
Now I want to reuse the same modules in my next playbook for a different project.
I really want to push those modules to a public git repository and then somehow tell ansible to use the modules from the git repository.
(kinda like npm package.json referencing github)
I can't seem to find any documentation on how to do that.
For now, I am using a workaround where I tell people to npm install the repository and then define the ANSIBLE_LIBRARY variable.
How can I tell the playbook to load modules from a github repository or some other remote location?
Actually, modules have been able to be nested inside roles for quite a long time. Since Ansible 2, this is even possible with most plugin types.
The folders where the modules and plugins need to be stored inside the role are the same as at playbook level: modules go into library, plugins go into *_plugins (action_plugins, callback_plugins, filter_plugins, etc.).
To make the module/plugin available, the role has to be applied to the playbook (or added as a dependency of another role).
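For example, a role shipping a custom module and an action plugin might be laid out like this (names illustrative):

```
roles/
  myrole/
    library/
      my_module.py       (modules)
    action_plugins/
      my_action.py       (action plugins)
    tasks/
      main.yml
```

Any play that applies myrole can then use my_module and my_action in its tasks.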
The only exception known to me is vars plugins, and that makes perfect sense: vars plugins are executed when the inventory is read, which happens before roles are interpreted.
vars_plugins can still be distributed in roles, but the path needs to be added to ansible.cfg. Fortunately, you can also use wildcards in paths:
vars_plugins = roles/*/vars_plugins
And no, all of this is not documented in any way. :)
Finally, to distribute roles you can use Ansible Galaxy:
ansible-galaxy install foo
There's nothing wrong with using git directly; Ansible Galaxy is really just a tool that installs git repositories. But since Galaxy is the Ansible standard, I suggest at least providing a Galaxy-compatible format. A good (best?) practice for installing Galaxy roles separately from a project's own roles can be found here.
Here's an example for an action plugin: https://galaxy.ansible.com/udondan/ssh-reconnect/
There is no solution for that at the moment. Maybe you can add a playbook to your project that downloads the modules, to avoid npm, but even that is not very nice.
I have my custom modules in a directory next to my playbooks. This directory is defined in my global ansible.cfg file:
library = /usr/share/ansible
The only drawback here is that I always have the same version of the modules for all playbooks.
My app uses Mochiweb.
I have noticed that Mochiweb files reside in the myapp/deps/mochiweb directory and rebar compiles them when I run make in the myapp directory.
I wanted to add ibrowse to write a few tests which make http requests to my app. So I cloned ibrowse from github to myapp/deps/ibrowse directory.
But it seems that Erlang does not know where to get the .beam files for ibrowse and therefore all my tests that use the ibrowse module fail:
myapp
ebin %%compiled tests reside here, tests which use ibrowse fail (badarg)
deps
mochiweb
ibrowse
ebin %%compiled ibrowse module resides here
src
tests
How can I make my Mochiweb-based app use other Erlang/OTP external libraries?
Should I edit rebar.config or Makefile for that? Or maybe I should edit an _app.src file?
Edit: Maybe I should edit the list of directories in the myapp_sup.erl file? (myapp_deps:local_path(["priv", "www"]))
P.S. How does my app know where all the mochiweb.beam files reside? (for example, the generic myapp_web.erl uses a call to mochiweb_http module, but there is no mochiweb_http.beam in the myapp/ebin directory).
Dependencies in rebar are added via the rebar.config file:
%% What dependencies we have, dependencies can be of 3 forms, an application
%% name as an atom, eg. mochiweb, a name and a version (from the .app file), or
%% an application name, a version and the SCM details on how to fetch it (SCM
%% type, location and revision). Rebar currently supports git, hg, bzr and svn.
{deps, [application_name,
        {application_name, "1.0.*"},
        {application_name, "1.0.*",
         {git, "git://github.com/basho/rebar.git", {branch, "master"}}}]}.
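Applied to the question, pulling in ibrowse would look something like this in rebar.config (the repository URL and branch are shown as an assumption; check the upstream project for the canonical location):

```erlang
{deps, [
    {ibrowse, ".*",
     {git, "git://github.com/cmullaparthi/ibrowse.git", {branch, "master"}}}
]}.
```

After that, rebar get-deps fetches it into deps/ and rebar compile builds it alongside the app, so its .beam files end up on the code path just like mochiweb's.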
Then, you probably want to look at Erlang releases and release handling with rebar. Think of a release as a way of grouping applications.
http://www.erlang.org/doc/design_principles/release_handling.html
http://learnyousomeerlang.com/release-is-the-word
https://github.com/basho/rebar/wiki/Release-handling
Adding the following call to myapp_web.erl solved my problem:
ibrowse:start()
By default Mochiweb is started in the same function:
mochiweb_http:start()...
I am not sure if this is the proper way to do it, but it works.