Ansible "Permission denied" when trying to install/check a gem - ruby

I recently decided to switch my Ansible deployment to install Ruby via rbenv rather than via apt-get (the ruby1.9.1 package). Now I'm getting an error when trying to install a gem via Ansible.
TASK: [nginx | s3cp gem] ******************************************************
failed: [staging.myapp.com] => {"cmd": ["/usr/local/bin", "query", "-n", "^s3cp$"], "failed": true, "item": "", "rc": 13}
msg: [Errno 13] Permission denied
FATAL: all hosts have already failed -- aborting
Ansible playbook entry for this command:
- name: s3cp gem
  gem: name=s3cp state=present executable=/usr/local/bin
I have sudo set to "yes" in a higher-level call to this playbook part, so I am not sure why it's tripping up. I am also able to log in as the same user Ansible uses, navigate to that directory, and install this gem by hand.
It was working fine when I was using apt-get to install ruby1.9.1. Any ideas?
This is deployed to an Ubuntu 13.04 server, by the way.
MORE INFORMATION:
Apparently it's not just tripping up on s3cp. I skipped that one and went on to another command to install Bundler, which failed in the same way. I am wondering if there's a default Ruby that's conflicting with the rbenv Ruby (though running which ruby when SSH'd in yields the expected rbenv path).
MORE-MORE INFO:
I tried to install ruby via rvm instead. I had the same error. :(

What happens when you run ansible with -vvvv? It should provide full verbose output of the tasks, hopefully including any errors that it encounters. With a bit of luck it will show you what the problem is.
Another thing to check is what user you're running the tasks as. How do you have the following parameters set at the top of your play (or do you not specify any of these)?
- hosts: myhosts
  user: someuser
  sudo: True
  sudo_user: another_user

As far as I know, the gem Ansible module is not rbenv-aware. This means that when you call the gem module, it will try to install the gem system-wide, which of course will fail if you're not acting as root on your node.
To install a gem with rbenv, you must use rbenv's gem shim. The only way to do this is to trigger rbenv init by sending the command through bash:
- name: Install Bundler
  command: bash -lc "gem install bundler"
This has already been addressed here:
Install Bundler gem using Ansible
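As an alternative sketch (not from the linked answer, and assuming rbenv lives under ~/.rbenv for the user Ansible connects as), you could instead point the gem module at rbenv's gem shim and turn off the per-user install:
# Hypothetical alternative: use rbenv's gem shim directly.
# The ~/.rbenv path is an assumption about where rbenv is installed.
- name: Install Bundler via rbenv's gem shim
  gem:
    name: bundler
    state: present
    user_install: no
    executable: ~/.rbenv/shims/gem
Whether this works depends on your rbenv setup; the bash -lc approach above is the one from the linked answer.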

Related

Ansible: Unable to run docker compose in an ansible playbook

I appear to be unable to run docker compose tasks in an ansible playbook. I get stuck in a loop.
The first error I get when running sudo ansible-playbook playbook.yml is the following
fatal: [10.0.3.5]: FAILED! => {"changed": false, "msg": "Unable to load docker-compose. Try `pip install docker-compose`. Error: No module named compose"}
so I remoted into that machine, ran sudo pip install docker-compose, and tried running the playbook again. This time I get...
fatal: [10.0.3.5]: FAILED! => {"changed": false, "msg": "Cannot have both the docker-py and docker python modules installed together as they use the same namespace and cause a corrupt installation. Please uninstall both packages, and re-install only the docker-py or docker python module"}
so I try uninstalling docker python...
sudo uninstall docker python
Then I get the following when attempting to run the playbook again
fatal: [10.0.3.5]: FAILED! => {"changed": false, "msg": "Failed to import docker-py - No module named docker. Try `pip install docker-py`"}
However, this is already installed on the machine; when I run sudo pip install docker-py I see the following...
Requirement already satisfied (use --upgrade to upgrade): docker-py in /usr/local/lib/python2.7/dist-packages
Cleaning up...
Does anyone know how to escape this loop and successfully get an ansible playbook that uses docker-compose to run?
The machine OS is Linux 14.04.
Thanks,
What worked for me was to first uninstall everything docker related in the virtualenv for Ansible.
pip uninstall docker docker-py docker-compose
Then install the docker-compose module, which will pull in the docker module as a dependency.
pip install docker-compose
The Ansible docker module will try to import docker, which now succeeds with the docker module installed, so it no longer raises the error with the misleading instruction to install docker-py.
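Once the Python modules are sorted out, the playbook task that triggered the original error would typically be a docker_compose task along these lines (a minimal sketch; the /opt/myapp path is hypothetical, and depending on your Ansible version the module may be named docker_service or community.docker.docker_compose):
# Sketch: run an existing Compose project on the target host.
# Assumes /opt/myapp contains a docker-compose.yml.
- name: Bring up the Compose project
  docker_compose:
    project_src: /opt/myapp
    state: present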
I just had the same error message while trying to run sudo ansible playbook-with-docker.yaml:
{"changed": false, "msg": "Unable to load docker-compose. Try `pip install docker-compose`. Error: No module named compose"}
It took me about 2 hours to figure out that on Linux pip install is not the same as sudo pip install (quite obvious, once you know what's happening).
So in case someone has the same issue: make sure you are running everything consistently either with sudo or without, but don't mix the two :)
...and use sudo pip list | grep docker to verify.
As already stated in other answers, the docker-compose Python module is missing.
You can install it manually as previous answers indicate, or you can use a "pure" Ansible solution, which is to install it via a task.
For that, use ansible.builtin.pip to install the docker-compose module (you can add more modules to the same task if needed).
- hosts: all
  gather_facts: no
  tasks:
    - name: Install docker-compose python package
      ansible.builtin.pip:
        name: docker-compose
Reference: https://docs.ansible.com/ansible/latest/collections/ansible/builtin/pip_module.html
To add more context to Thermostat's answer:
I was using pip3 rather than pip, with the following setup:
Ubuntu 20.04
Ansible 2.10.7
Python 3.8.10
pip 20.0.2 (pip3)
Here's how I fixed mine:
First I ran this command to remove all existing copies of the docker, docker-py and docker-compose Python libraries:
pip3 uninstall docker docker-py docker-compose
Then I ran the command below to install the Python docker-compose library alongside the Python docker library:
pip3 install docker-compose
That's all.
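If you prefer to let Ansible do this rather than running pip3 by hand, the pip module can be pointed at pip3 explicitly. A minimal sketch, assuming pip3 is on the target host's PATH:
# Sketch: install docker-compose through pip3 from a playbook.
- name: Install docker-compose python package with pip3
  ansible.builtin.pip:
    name: docker-compose
    executable: pip3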

Ansible installed from source - what should my library setting be?

I have installed Ansible from source as per the instructions at:
http://docs.ansible.com/ansible/intro_installation.html
However when I try to use any command other than script, I get the following error:
fatal: [...]: FAILED! => {"failed": true, "msg": "ERROR! The module get_url was not found in configured module paths"}
If the source Ansible directory is /home/cloud/ansible, and I have done a make install, what should I set the library path setting to in ansible.cfg?
As @udondan says, make sure you used:
git clone https://github.com/ansible/ansible --recursive
to clone the Ansible repo, and then run:
cd ./ansible
source ./hacking/env-setup
You don't need to run make install.
The machine that has Ansible running on it needs some other Python modules too; they are listed at the bottom of the http://docs.ansible.com/ansible/intro_installation.html#running-from-source section. It's best to install the pip Python package manager with:
sudo easy_install pip
and then install the required packages:
sudo pip install paramiko PyYAML Jinja2 httplib2 six

Why doesn't gem install fpm via ansible?

I have created an EC2 instance on Amazon's cloud and I am installing some things via Ansible too. But when it installs fpm using the gem module:
- name: install fpm
  gem: name=fpm state=latest
  sudo: yes
it says:
changed: [XX.XX.XXX.XXX] => {"changed": true, "name": "fpm", "state": "latest", "version": "1.3.3"}
No errors. But when I enter the instance and try to run a script it says:
fpm is mandatory, please run gem install fpm
If I do sudo gem install fpm in the console of the EC2 instance, the script runs as expected.
So what am I doing wrong? Doesn't Ansible install fpm?
I fixed the problem by using
- name: install fpm
  command: bash -lc "gem install fpm"
instead of
- name: install fpm
  gem: name=fpm state=latest
  sudo: yes
Now it does not ask for fpm anymore; it is installed. But why doesn't the gem module work?
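One likely explanation (my assumption, not stated in the question or answer): the gem module defaults to a per-user install, so fpm ends up in the connecting user's gem directory, where a script run with sudo never finds it. A hedged sketch of asking for a system-wide install instead:
# Sketch: force a system-wide install so sudo-run scripts can find fpm.
# user_install defaults to yes in the gem module; whether turning it off
# matches your Ruby setup is an assumption.
- name: install fpm
  gem: name=fpm state=latest user_install=no
  sudo: yes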

ansible-galaxy role fails with "do not have permission to modify /etc/ansible/roles/"

tl;dr = How do OS X users recommend working around this permissions error?
I'm on OS X 10.10.1 and I recently installed Ansible by running the following:
sudo pip install ansible --quiet
sudo pip install ansible --upgrade
I want to start off with a Galaxy role to install Homebrew, but running it gives the following error:
$ ansible-galaxy install geerlingguy.homebrew
- downloading role 'homebrew', owned by geerlingguy
- downloading role from https://github.com/geerlingguy/ansible-role-homebrew/archive/1.0.1.tar.gz
- extracting geerlingguy.homebrew to /etc/ansible/roles/geerlingguy.homebrew
- error: you do not have permission to modify files in /etc/ansible/roles/geerlingguy.homebrew
- geerlingguy.homebrew was NOT installed successfully.
- you can use --ignore-errors to skip failed roles.
While I see that /etc is owned by root, I don't see any notes in the documentation saying I should chmod anything.
For reference:
$ ansible --version
ansible 1.8.2
configured module search path = None
Is this expected or is my installation somehow wrong?
The default location for roles is /etc/ansible/roles (for versions <= 2.3; since v2.4 the default has changed to ~/.ansible/roles/, and an issue has been raised about it). You need to specify --roles-path when using ansible-galaxy. Here's what ansible-galaxy install --help says:
-p ROLES_PATH, --roles-path=ROLES_PATH
The path to the directory containing your roles. The
default is the roles_path configured in your
ansible.cfg file (/etc/ansible/roles if not
configured)
You can also set roles_path in ansible.cfg; see the documentation for details.
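If you prefer to keep roles per project instead of in a system path, a small requirements file plus ansible-galaxy install -r requirements.yml -p ./roles also avoids touching /etc/ansible/roles. A sketch (the file name and local ./roles path are just conventions, not from the answer):
# requirements.yml - hypothetical per-project role list
- src: geerlingguy.homebrew
  version: 1.0.1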
Or you can use brew to install Ansible. To do that, run:
brew install ansible
If you had any previous installations, it is possible that you will see a message like this:
Error: The brew link step did not complete successfully
The formula built, but is not symlinked into /usr/local
Could not symlink bin/ansible
Target /usr/local/bin/ansible already exists.
You may want to remove it:
  rm '/usr/local/bin/ansible'

To force the link and overwrite all conflicting files:
  brew link --overwrite ansible

To list all files that would be deleted:
  brew link --overwrite --dry-run ansible

Possible conflicting files are:
/usr/local/bin/ansible
/usr/local/bin/ansible-console
/usr/local/bin/ansible-doc
/usr/local/bin/ansible-galaxy
/usr/local/bin/ansible-playbook
/usr/local/bin/ansible-pull
/usr/local/bin/ansible-vault
So, run brew link --overwrite ansible to fix that. And now you will be able to install any roles without sudo.
Example:
» ansible-galaxy install bennojoy.redis
- downloading role 'redis', owned by bennojoy
- downloading role from https://github.com/bennojoy/redis/archive/master.tar.gz
- extracting bennojoy.redis to /usr/local/etc/ansible/roles/bennojoy.redis
- bennojoy.redis was installed successfully
Since I saw you used sudo to install Ansible, I suppose it's OK to continue using sudo for the ansible-galaxy installation. And that's what I just did.

Heroku command not found

After installing the Heroku Toolbelt on my Mac, when I try to run the following command in the terminal:
heroku
I get the error:
bash: heroku: command not found
When I do:
gem environment
I get:
- RUBYGEMS VERSION: 1.3.6
- RUBY VERSION: 1.8.7 (2012-02-08 patchlevel 358) [universal-darwin11.0]
- INSTALLATION DIRECTORY: /Library/Ruby/Gems/1.8
- RUBY EXECUTABLE: /System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/bin/ruby
- EXECUTABLE DIRECTORY: /usr/bin
- RUBYGEMS PLATFORMS:
- ruby
- universal-darwin-11
- GEM PATHS:
- /Library/Ruby/Gems/1.8
- /Users/Bart/.gem/ruby/1.8
- /System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/gems/1.8
- GEM CONFIGURATION:
- :update_sources => true
- :verbose => true
- :benchmark => false
- :backtrace => false
- :bulk_threshold => 1000
- REMOTE SOURCES:
- http://rubygems.org/
I've tried adding several paths to $PATH, but nothing works...
Manually adding the symlink after installing Toolbelt fixed it for me.
sudo ln -s /usr/local/heroku/bin/heroku /usr/bin/heroku
(This answer is for other people who may land here and find it useful.)
Suppose you installed the heroku snap from the command line as follows (which is what you will find in the Heroku docs):
sudo snap install heroku --classic
and after the installation the heroku command isn't available. Here is the solution and the reason why:
First, know that when you install a new snap, it gets added to the /snap folder. A new folder with the snap's name is created (/snap/heroku), and the executable for the command is added to /snap/bin (/snap/bin/heroku).
Try
/snap/bin/heroku help
and you will find it works very well.
Solution: you just have to add /snap/bin to your PATH environment variable.
Heroku assumes that's already done. I don't know whether this should have been done automatically when the snapd package was installed, but anyway, that's it.
For how to add new paths to the PATH environment variable, look at the links below to get a good idea (in case you don't know this already):
https://stackoverflow.com/a/26962251/7668448
https://askubuntu.com/questions/866161/setting-path-variable-in-etc-environment-vs-profile
https://www.computerhope.com/issues/ch001647.htm
https://hackprogramming.com/2-ways-to-permanently-set-path-variable-in-ubuntu/
http://www.troubleshooters.com/linux/prepostpath.htm
https://serverfault.com/questions/166383/how-set-path-for-all-users-in-debian
Here are links about why you need to log out and back in, or reboot:
Setting environment variable globally without restarting Ubuntu
https://superuser.com/questions/339617/how-to-reload-etc-environment-without-rebooting
Here is an example:
sudo nano /etc/environment
I chose to add the path through /etc/environment (remember you can't use shell commands there).
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/opt/node-v9.6.1-linux-x64/bin:/snap/bin
You can see I added it at the end (that simple).
Reboot your computer or log out and back in (a PAM script builds the PATH from /etc/environment at session creation time).
If you want the effect to take place right away, execute:
source /etc/environment && export PATH
(this affects only the currently open shell and its child processes)
Here is another example, doing it in /etc/profile:
if [ "`id -u`" -eq 0 ]; then
PATH="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
else
PATH="/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games"
fi
PATH="$PATH:/snap/bin"
export PATH
I just added one line (the one before the last; note that this is only a portion of the whole /etc/profile file).
Reboot or log out and back in.
Execute:
source /etc/profile
to be operational right away (this affects the current shell and its child processes).
There are different ways to add to PATH, even an infinity of ways if we let our imagination run. The difference between them is when the variable gets set and executed, what scope it reaches, and how you prefer to organize things (for example, I could keep my own text list, one path per line, and have it compiled and applied in the right place). See the links above, which are a good selection for understanding how things work and which method to choose, but generally the two above are mostly what you need for a system-wide configuration.
Do remember to actually source the installation file.
wget -0- wget https://toolbelt.heroku.com/install-ubuntu.sh | sh
didn't work for me, and as a Linux noob I used this instead:
wget 0- wget https://toolbelt.heroku.com/install-ubuntu.sh | sh
Notice that the '-' is missing from the option to wget. This downloaded the install script to my current directory.
then I did:
bash install-ubuntu.sh
which finished up the installation for me.
then:
heroku login
works!!
Just run
$ gem install heroku
from your app, and that's it.
I am using zsh, which didn't have snap in its path. So just add this to ~/.zshrc:
export PATH=$PATH:/snap/bin
Try npm install -g heroku for any platform.
I ran gem install heroku first and it gave me the following message:
heroku must be installed from cli.heroku.com. This gem is no longer available. (RuntimeError)
Steps from Heroku:
brew tap heroku/brew && brew install heroku
or on Ubuntu:
sudo snap install --classic heroku
When you install heroku on Linux as per the documentation, using
sudo snap install heroku --classic
it will install the heroku executable at /snap/bin/heroku,
but when you type the command in the terminal it looks in the /usr/bin/ directory.
A simple solution is to create a symlink:
sudo ln -s /snap/bin/heroku /usr/bin/heroku
After that you can just run the heroku command in the terminal.
First install heroku:
wget -qO- https://toolbelt.heroku.com/install.sh | bash
After that, add a symlink to the binary like @Garrett did:
sudo ln -s /usr/local/heroku/bin/heroku /usr/bin/heroku
Export the snap directory:
export PATH=$PATH:/snap/bin
For yarn
If you want to deploy your backend or server, go to the backend or server folder and use:
yarn global add heroku
For deploying the frontend or client, go to the frontend or client folder and use the same command.
For npm
Go to the respective folder you want to deploy and use npm i -g heroku
After installing Heroku Toolbelt using the .pkg file I downloaded from Heroku's Getting Started with Rails 4.x on Heroku page, I got the heroku command not found message. My /usr/local/heroku/bin folder did exist.
I was able to resolve this issue by going to https://toolbelt.heroku.com and downloading the same .pkg file from that site and re-installing it. Note, I did not uninstall the previous package first.
After you run wget -0- wget https://toolbelt.heroku.com/install-ubuntu.sh | sh you might get the following warning:
WARNING: The following packages cannot be authenticated!
heroku heroku-toolbelt
If this happens, run apt-get install -y --force-yes heroku-toolbelt.
I've run all the commands with sudo, but I don't know if it makes a difference. Thanks to this answer
Brew install did not work on macOS?
For me, brew tap heroku/brew && brew install heroku did not work on macOS.
So I tried the standalone download.
Here is the command that worked for me:
curl https://cli-assets.heroku.com/install.sh | sh
