Installing Composer programmatically works on Vagrant but not on an EC2 instance - amazon-ec2

I'm working on an Ansible playbook.
When provisioning a Vagrant machine, it goes well, without errors.
Right now I'm stuck on the step where Composer is installed programmatically.
install-composer.sh (this script was taken from the Composer download page):
#!/bin/sh
EXPECTED_SIGNATURE="$(wget -q -O - https://composer.github.io/installer.sig)"
echo $EXPECTED_SIGNATURE;
php -r "copy('https://getcomposer.org/installer', 'composer-setup.php');"
ACTUAL_SIGNATURE="$(php -r "echo hash_file('sha384', 'composer-setup.php');")"
echo $ACTUAL_SIGNATURE; # <- Here it is empty!
if [ "$EXPECTED_SIGNATURE" != "$ACTUAL_SIGNATURE" ]
then
    >&2 echo 'ERROR: Invalid installer signature'
    rm composer-setup.php
    exit 1
fi
php composer-setup.php --quiet
RESULT=$?
rm composer-setup.php
exit $RESULT
The EC2 instance runs Ubuntu 16.04, the same as the Vagrant box.
Here is the playbook task where I'm getting the error:
- name: Download Composer
  script: scripts/install_composer.sh
  register: composer_setup
  #when: not composer_stat.stat.exists
  tags:
    - deploy
And the full error with --verbose:
TASK [Download Composer] *******************************************************
fatal: [18.203.185.87]: FAILED! => {"changed": true, "failed": true, "rc": 1,
"stderr": "Shared connection to 18.203.185.87 closed.\r\n",
"stdout": "a5c698ffe4b8e849a443b120cd5ba38043260d5c4023dbf93e1558871f1f07f58274fc6f4c93bcfd858c6bd0775cd8d1\r\n/home/ubuntu/.ansible/tmp/ansible-tmp-1566333565.4-174304924088429/install_composer.sh: 5: /home/ubuntu/.ansible/tmp/ansible-tmp-1566333565.4-174304924088429/install_composer.sh: php: not found\r\n/home/ubuntu/.ansible/tmp/ansible-tmp-1566333565.4-174304924088429/install_composer.sh: 6: /home/ubuntu/.ansible/tmp/ansible-tmp-1566333565.4-174304924088429/install_composer.sh: php: not found\r\n\r\nERROR: Invalid installer signature\r\nrm: cannot remove 'composer-setup.php': No such file or directory\r\n",
"stdout_lines": [
  "a5c698ffe4b8e849a443b120cd5ba38043260d5c4023dbf93e1558871f1f07f58274fc6f4c93bcfd858c6bd0775cd8d1",
  "/home/ubuntu/.ansible/tmp/ansible-tmp-1566333565.4-174304924088429/install_composer.sh: 5: /home/ubuntu/.ansible/tmp/ansible-tmp-1566333565.4-174304924088429/install_composer.sh: php: not found",
  "/home/ubuntu/.ansible/tmp/ansible-tmp-1566333565.4-174304924088429/install_composer.sh: 6: /home/ubuntu/.ansible/tmp/ansible-tmp-1566333565.4-174304924088429/install_composer.sh: php: not found",
  "",
  "ERROR: Invalid installer signature",
  "rm: cannot remove 'composer-setup.php': No such file or directory"]}
changed: [192.168.33.10] => {"changed": true, "rc": 0,
"stderr": "Shared connection to 192.168.33.10 closed.\r\n",
"stdout": "a5c698ffe4b8e849a443b120cd5ba38043260d5c4023dbf93e1558871f1f07f58274fc6f4c93bcfd858c6bd0775cd8d1\r\na5c698ffe4b8e849a443b120cd5ba38043260d5c4023dbf93e1558871f1f07f58274fc6f4c93bcfd858c6bd0775cd8d1\r\n",
"stdout_lines": [
  "a5c698ffe4b8e849a443b120cd5ba38043260d5c4023dbf93e1558871f1f07f58274fc6f4c93bcfd858c6bd0775cd8d1",
  "a5c698ffe4b8e849a443b120cd5ba38043260d5c4023dbf93e1558871f1f07f58274fc6f4c93bcfd858c6bd0775cd8d1"]}
Any idea why this line returns an empty string on EC2?
"$(php -r "echo hash_file('sha384', 'composer-setup.php');")"
Thanks!

My first guess is that your EC2 instance doesn't have access to the internet. Can you verify this? The verbose logs indicate that the composer-setup.php file does not exist when you're trying to hash it. You should try downloading that file using the current copy() method, then doing something like:
TESTSTUFF="$(php -r "echo file_exists('composer-setup.php') ? 'FILE EXISTS' : 'FILE DOES NOT EXIST';")"
echo $TESTSTUFF
If the file does not exist, try to download a different file from somewhere, such as a dummy test file from here.
All of the signs point to the file not existing, which can be caused by a range of issues. The most likely are:
No internet access on EC2 (Can you ping 8.8.8.8?)
Improper permissions to execute php
Improper permissions to write to the destination filesystem
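A quick way to check for the first two causes is to verify the binaries the installer script depends on before running it. This is a hypothetical pre-flight sketch, not part of the original playbook:

```shell
#!/bin/sh
# Hypothetical pre-flight check for install-composer.sh:
# report whether each required command is on PATH.
check_cmd() {
  if command -v "$1" >/dev/null 2>&1; then
    echo "$1: found"
  else
    echo "$1: not found"
  fi
}
check_cmd php
check_cmd wget
```

On the failing EC2 host this would have printed `php: not found`, matching the error buried in the verbose output.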

After trying what Aaron said, I saw that PHP was not correctly installed (not installed at all).
I was executing:
ansible-playbook ansible/playbook.yml -i ansible/hosts.ini -t deploy --ask-vault-pass --verbose
Note the deploy tag: at that point the play assumed PHP was already installed. All I had to do was remove -t deploy from my command!

Related

Run mysqlsh commands in Ansible playbook

I would like to create a MySQL InnoDB cluster, so I want to run mysql-shell commands in an Ansible playbook, but I'm getting a memory error. Below are the code and the error:
tasks:
  - name: get cluster status from dbboxes
    shell: mysqlsh test123:test123#box1:3306 -e "createCluster('test_cluster')"
The execution error is as follows:
The full traceback is:
Traceback (most recent call last):
  File "/tmp/ansible_command_payload__lbu2_tp/ansible_command_payload.zip/ansible/module_utils/basic.py", line 2724, in run_command
    stdout += b_chunk
MemoryError
fatal: [localhost]: FAILED! => {
    "changed": false,
    "cmd": "/u01/mysql/8.0/bin/mysqlsh --uri 'test123:********#box1:3306' -e 'var cluster=dba.createCluster('\"'\"'test_cluster'\"'\"')'",
    "invocation": {
        "module_args": {
            "_raw_params": "/u01/mysql/8.0/bin/mysqlsh --uri test123:test123#box1:3306 -e \"var cluster=dba.createCluster('test_cluster')\"",
            "_uses_shell": true,
            "argv": null,
            "chdir": null,
            "creates": null,
            "executable": null,
            "removes": null,
            "stdin": null,
            "stdin_add_newline": true,
            "strip_empty_ends": true,
            "warn": true
        }
    },
    "msg": "",
    "rc": 257
}
Kindly suggest a better approach for this.
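The MemoryError is raised while Ansible buffers the command's stdout (the stdout += b_chunk line in the traceback), which suggests mysqlsh is streaming more output than the controller process can hold. One hedged workaround, assuming the output itself isn't needed inside the play, is to redirect it to a file on the remote host so Ansible never has to buffer it (the log path here is illustrative):

```yaml
tasks:
  - name: create cluster, keeping mysqlsh output out of Ansible's buffer
    shell: >
      mysqlsh test123:test123#box1:3306
      -e "dba.createCluster('test_cluster')"
      > /tmp/create_cluster.log 2>&1
```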

fail2ban "command not found" when executing banaction

One of the actions for fail2ban is configured to run a ruby script; however, fail2ban fails when trying to execute the ruby script with a "Command not found" error. I don't understand this error because I'm providing the full path to the ruby script and it has execution permissions:
Here's my fail2ban action:
[root:a17924e746f0:~]# cat /etc/fail2ban/action.d/404.conf
# Fail2Ban action configuration file for Subzero/Core
[Definition]
actionstart =
actionstop =
actioncheck =
actionban = /root/ban_modify.rb ban <ip>
actionunban = /root/ban_modify.rb unban <ip>
Here are the contents to the /root/ban_modify.rb script:
#!/usr/bin/env ruby
command = ARGV[0]
ip_address = ARGV[1]
blacklist = File.open("/root/blacklist.txt").read.split("\n")
if command == "unban"
  if blacklist.include? "#{ip_address} deny"
    blacklist.delete "#{ip_address} deny"
  end
elsif command == "ban"
  blacklist << "#{ip_address} deny"
end
File.open("/root/blacklist.txt", "w") {|f| f.write(blacklist.join("\n"))}
Very simple. This blacklist.txt file is used by Apache to permanently ban individuals from the web server when a fail2ban condition is met.
However, when I issue the following command: sudo /usr/bin/fail2ban-client set 404 unbanip <my ip>
I get the following error:
2019-08-19 20:56:43,508 fail2ban.utils [16176]: Level 39 7ff7395873f0 -- exec: ban_modify.rb ban <myip>
2019-08-19 20:56:43,509 fail2ban.utils [16176]: ERROR 7ff7395873f0 -- stderr: '/bin/sh: 1: ban_modify.rb: not found'
2019-08-19 20:56:43,509 fail2ban.utils [16176]: ERROR 7ff7395873f0 -- returned 127
2019-08-19 20:56:43,509 fail2ban.utils [16176]: INFO HINT on 127: "Command not found". Make sure that all commands in 'ban_modify.rb ban <myip>' are in the PATH of fail2ban-server process (grep -a PATH= /proc/`pidof -x fail2ban-server`/environ). You may want to start "fail2ban-server -f" separately, initiate it with "fail2ban-client reload" in another shell session and observe if additional informative error messages appear in the terminals.
2019-08-19 20:56:43,509 fail2ban.actions [16176]: ERROR Failed to execute ban jail '404' action '404' info 'ActionInfo({'ip': '<myip>', 'family': 'inet4', 'ip-rev': '<myip>.', 'ip-host': '<myip>', 'fid': '<myip>', 'failures': 1, 'time': 1566266203.3465006, 'matches': '', 'restored': 0, 'F-*': {'matches': [], 'failures': 1}, 'ipmatches': '', 'ipjailmatches': '', 'ipfailures': 1, 'ipjailfailures': 1})': Error banning <myip>
I'm not sure why this error is happening if the actionban is pointing to the full path of a ruby script.
I even tried changing the contents of /root/ban_modify.rb to simply puts "Hello World". I also tried changing the banaction to iptables-allports, and that still failed. It seems like banaction simply doesn't work.
You can enable fail2ban debug mode and check the fail2ban log for more details:
# change fail2ban log level
sudo nano /etc/fail2ban/fail2ban.conf
loglevel = DEBUG
# restart fail2ban
sudo systemctl restart fail2ban
# check logs
tail -f /var/log/fail2ban.log
You can then restart fail2ban and check it again:
sudo systemctl restart fail2ban
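Another check, following the HINT printed in the log above: inspect the PATH that the fail2ban-server process actually sees. A minimal sketch (the helper name is hypothetical; assumes a Linux /proc filesystem):

```shell
#!/bin/sh
# Print the PATH from a given process's environment (Linux /proc only).
path_of_pid() {
  tr '\0' '\n' < "/proc/$1/environ" | grep '^PATH='
}
# For fail2ban, e.g.:
#   path_of_pid "$(pidof -x fail2ban-server)"
```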

Ansible apt module - "unsupported parameter for module: “name"

Ansible version 2.1.2.0 (homebrew, macOS - having removed any previous versions)
ansible myserver -m apt -a “name=backup2l,state=present” --ask-pass
returns this error:
myserver | FAILED! => {
"changed": false,
"failed": true,
"msg": "unsupported parameter for module: “name"
}
This seems the correct syntax according to the examples:
# Install the package "foo"
- apt: name=foo state=present
I've tried wrapping the values for name and state in single quotes, also using a space between the parameters (it doesn't like that – "ERROR! Missing target hosts").
Any ideas?
Ansible arguments are separated by spaces like a command line, not a function call.
Try:
ansible myserver -m apt -a “name=backup2l state=present” --ask-pass
This was due to using “smart quotes” rather than "regular quotes", caused by typing the command out in my notes application first and then copying and pasting it into iTerm.
NB: the error message you get depends on what you're trying to do - if you're running a single command with -a, Ansible will say "No such file or directory".
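To see why smart quotes break the argument, a quick illustration (hypothetical values): smart quotes are ordinary characters to the shell, so the text inside them is still split on whitespace, while ASCII quotes group it into one word.

```shell
#!/bin/sh
# Count how many words the shell passes in each case.
smart_count=$(printf '%s\n' “name=backup2l state=present” | wc -l)
ascii_count=$(printf '%s\n' "name=backup2l state=present" | wc -l)
echo "smart quotes produced $smart_count words"
echo "ascii quotes produced $ascii_count words"
```

With smart quotes the -a value arrives as two separate words, the first still carrying a literal “ character, which is why apt sees an unsupported parameter.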
Solution for Mac users:
System Preferences > Keyboard > Text > uncheck "Use smart quotes and dashes".
Vote for this if you like: iTerm2 feature request

Can ansible ad-hoc tolerate some hosts failures?

I know Ansible playbooks can set max_fail_percentage to allow the playbook to progress if at least that percentage of the hosts succeeded. However, I want to run an ad-hoc command that succeeds (exit status 0) if at least a percentage of the hosts executed without errors. Is that possible?
If you have a playbook that affects, say, 10 hosts and at some point during execution it fails on 1 host, Ansible will simply continue on all the other hosts (if you don't set max_fail_percentage at all). This is the default behaviour: generally, playbooks stop executing further steps only on a host that has had a failure.
This is mentioned also in Ansible docs: Ansible - max_failure_percentage
This behaviour is exactly the same for ad hoc commands.
Test, test, test...
EDIT:
Ansible alone will not do this; however, you can override the exit status by piping Ansible's output to, for example, a Perl one-liner and exiting with a different code there. It's quite ugly, but it works :)
See example below, it exits with 0 only if > 65% of hosts succeeded, otherwise exit code is 2.
In order to catch failures and parse them, you need to redirect STDERR to STDOUT from the ansible command (thus the 2>&1 at the end of the Ansible command; Perl will not see the failures otherwise).
$ ansible all -i provisioning/vagrant-inventory -u vagrant --private-key=~/.vagrant.d/insecure_private_key -m ping 2>&1 | perl -pe 'BEGIN { $failed=0; $success=0;} END { $exit_code=( $success/($success+$failed) ) > 0.65 ? 0 : 2; exit $exit_code;} $failed++ if /\| FAILED/i; $success++ if /\| success/i;'
192.168.111.210 | success >> {
"changed": false,
"ping": "pong"
}
192.168.111.200 | success >> {
"changed": false,
"ping": "pong"
}
192.168.111.211 | FAILED => SSH Error: data could not be sent to the remote host. Make sure this host can be reached over ssh
$ echo $?
0

How should script output be formatted for Ansible reporting?

I'm using Ansible v1.3 to run a bash script on a group of servers. I'm trying to format my output the way Ansible expects so that it is reported correctly, but I'm missing something.
I've read somewhere (can't find the link!) that if script output is formatted as JSON, Ansible will pick it up and include it in the output.
So in the script, the very last thing I do is this:
cat <<EOF
{
"value" : $value
}
EOF
I call my script like this:
ansible target_hosts -m script -a script.sh
And the output I get is like this:
X.X.X.X | success >> {
"rc": 0,
"stderr": "",
"stdout": "value=96\r\n"
}
I'm expecting to see something like this:
X.X.X.X | success >> {
"rc": 0,
"stderr": "",
"stdout": "",
"value": "96"
}
What am I missing?
The problem is that you are running your module as a script. Create a library folder and put your script there. After that, you can run your script with:
ansible target_hosts -m script.sh
In case of doubt, take a look at: http://jpmens.net/2012/07/05/shell-scripts-as-ansible-modules/
Note: Don't forget to include a #!/bin/bash line at the top of the file, or Ansible will fail with a message like target_host | FAILED => module is missing interpreter line
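A minimal sketch of what such a module-style script could look like (hypothetical values; per the article linked above, the JSON object must be the only thing the script prints to stdout):

```shell
#!/bin/bash
# Hypothetical shell-script module: compute a value and emit one JSON object.
value=96
printf '{"changed": false, "value": "%s"}\n' "$value"
```

With the script installed as a module, the "value" key would then appear as a top-level field in Ansible's per-host result instead of being buried in "stdout".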
