How to override AWS Elastic Beanstalk default "Message of the day" - amazon-ec2

Elastic Beanstalk is used to deploy a Docker container on an EC2 instance, so the EC2 instance is managed by Elastic Beanstalk.
Using .ebextensions settings I can change the file /var/lib/update-motd/motd on my EC2 instance.
Using:
files:
  "/var/lib/update-motd/motd":
    content: |
      The Custom Message
      # Menu
      > app [open docker info]
      > logs [Print logs from Rails app]
    group: root
    mode: "000644"
    owner: root
But every day this gets wiped away and the EB default message is back!
How can I make sure my custom motd stays there?

I think one way to make this motd persistent is to put something in /etc/motd and then remove the update-motd package.
Create the file .ebextensions/000update-motd.config:
files:
  "/home/ec2-user/updatemotd.sh":
    mode: "000777"
    owner: root
    group: root
    content: |
      #!/bin/bash
      yum erase -y update-motd ; unlink /etc/motd
      yum install -y figlet
      echo `{"Ref": "AWSEBEnvironmentName" }` | figlet -f standard > /etc/motd

commands:
  updatemotd:
    command: "/home/ec2-user/updatemotd.sh"

https://blog.eq8.eu/til/elasticbeanstalk-update-ssh-welcome-message-motd.html
Create the file .ebextensions/91_update_motd_welcome_message_after_ssh.config and add this content:
files:
  "/tmp/20-custom-welcome-message":
    mode: "000755"
    owner: root
    group: root
    content: |
      cat << EOF
      THIS WILL BE YOUR WELCOME MESSAGE
      EOF

commands:
  80_tell_instance_to_regenerate_motd:
    command: mv /tmp/20-custom-welcome-message /etc/update-motd.d/20-custom-welcome-message
  99_tell_instance_to_regenerate_motd:
    command: /usr/sbin/update-motd
This will add an extra message after the Elastic Beanstalk message. If you want to override the original message instead, change the file name from 20-custom-welcome-message to 10eb-banner.
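For example, a sketch of that override variant, assuming (as the answer implies) that the default banner is generated by /etc/update-motd.d/10eb-banner; only the file name and destination change, the welcome text is a placeholder:

files:
  "/tmp/10eb-banner":
    mode: "000755"
    owner: root
    group: root
    content: |
      cat << EOF
      THIS WILL REPLACE THE DEFAULT EB BANNER
      EOF

commands:
  80_tell_instance_to_regenerate_motd:
    command: mv /tmp/10eb-banner /etc/update-motd.d/10eb-banner
  99_tell_instance_to_regenerate_motd:
    command: /usr/sbin/update-motd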

Related

Ansible cannot unarchive using 'root' username

I have the classic unarchive example in my playbook as follows:
- name: Extract foo.tgz into /var/lib/foo
  ansible.builtin.unarchive:
    src: foo.tgz
    dest: /tmp
I get the error:
"Commands \"gtar\" and \"tar\" not found. Command \"unzip\" not found."
I did have a look at the answer here. However, in my case the issue is specific to the 'root' user; with the user john, the unarchive is successful.
The following tests are successful for both root and john:
ansible all -i hosts -m command -a "which tar" -l hostname --user [root,john]
... results in
hostname | CHANGED | rc=0 >> /usr/local/bin/tar
... and successfully finds the 'tar' binary.
What might be the issue?
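One additional check, not from the original post, that might narrow this down: the unarchive module looks up gtar/tar/unzip on the PATH of the remote user's non-interactive shell, and /usr/local/bin is sometimes missing from root's. A quick comparison, using the same ad-hoc style as above:

ansible all -i hosts -m shell -a 'echo $PATH' -l hostname --user root
ansible all -i hosts -m shell -a 'echo $PATH' -l hostname --user john

If root's PATH lacks /usr/local/bin, one workaround sketch is to set PATH explicitly on the task (the value below is illustrative, not taken from the post):

- name: Extract foo.tgz into /tmp
  ansible.builtin.unarchive:
    src: foo.tgz
    dest: /tmp
  environment:
    # illustrative PATH including /usr/local/bin where tar was found
    PATH: "/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"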

ansible: how to become a passwordless user

I'm trying to achieve the following with Ansible:
1. Create a user without a password: adduser test <-- OK, works on a Linux machine and works with Ansible.
2. Change to the test user: su test <-- works on a Linux machine, but fails with Ansible with an "incorrect password" message.
3. Copy a file from location1 to location2 as the test user and change the file content: cp loc1/testfile.txt loc2/testfile.txt && echo "hello" > testfile.txt
---
- name: This is a hello-world example
  hosts: all
  tasks:
    - name: create a passwordless test user
      action: user name=test state=present
      become: yes
      become_user: root
    - name: Create a file called '/tmp/testfile.txt' with the content 'hello' using test user.
      copy:
        content: hello
        dest: /tmp/testfile.txt
        owner: test
        group: test
      become_user: test
Primary conditions: at the moment of execution, testfile.txt already exists on the Linux machine and is owned by user root and group root. I want to overwrite the file and assign a different user and group.
I've tried various combinations, including:
copy:
  content: hello
  dest: /tmp/testfile.txt
  owner: test
  group: test
become: yes
become_user: test

copy:
  content: hello
  dest: /tmp/testfile.txt
  owner: test
  group: test
become: yes
become_user: test
become_method: su

copy:
  content: hello
  dest: /tmp/testfile.txt
  owner: test
  group: test
become: yes

copy:
  content: hello
  dest: /tmp/testfile.txt
  owner: test
  group: test
become_user: test
become_method: su
I always get a message about the password being incorrect. The awkward part is that the test user has no password.
What am I doing wrong?
Updates:
Tried this
How to achieve sudo su - <user> and run all command in ansible <-- does not work
Found an answer - it is not possible
https://devops.stackexchange.com/questions/3588/how-do-you-simulate-sudo-su-user-in-ansible
What is the point?
to cite from Quora (source: https://www.quora.com/What-is-advantage-of-creating-passwordless-user-in-Linux)
I presume you mean processes such as a webserver, running as the
"apache" user with a locked password (shadow entry of '!!').
This is for security, in case a vulnerability is discovered in the
server code. Prior to the year 2000 or so, it was common for servers
to run as the root user, particularly as this privilege is required to
open network sockets on privileged ports (below 1024), such as 53
(DNS) or 80 (HTTP). As I recall, high-profile breaches of the bind and
sendmail servers caused developers to re-think this strategy. Since
then, services are started with root privilege, the socket opened, and
then privilege is dropped to a non-privileged user ID such as "apache"
or "named". This needs no password, since it is never intended that
anyone login. Rather, a process running as root executes a setuid()
system call to change effective user ID to this user. In the event of
a security breach, an attacker will be limited to the access lists of
this user; for instance, a vulnerable CGI script on a webserver would
be able to access the /tmp directory as the "apache" user, but be
unable to read /etc/shadow for instance, or to write an extra user
into /etc/passwd or modify system binaries in /sbin.
To avoid what is described in "password not being accepted for sudo user with ansible":
fatal: [testserver]: FAILED! => {"failed": true, "msg": "Incorrect su password"}
You might try using sudo, assuming you have given the test user sudo rights:
# On Debian-based systems (Ubuntu / Linux Mint / elementary OS), add users to the sudo group
sudo usermod -aG sudo username
# On RHEL-based systems (Fedora / CentOS), add users to the wheel group
sudo usermod -aG wheel username
Then:
become_user: test
become_method: sudo
Launched with:
ansible-playbook -i inventory simple_playbook.yml --ask-become-pass
and enter the become password when prompted.
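Putting the pieces together, the copy task from the question with these become settings might look like this (a sketch, assuming sudo is set up as above):

- name: Create '/tmp/testfile.txt' with the content 'hello' as the test user
  copy:
    content: hello
    dest: /tmp/testfile.txt
    owner: test
    group: test
  become: yes
  become_user: test
  become_method: sudo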

Galaxy role. The module file was not found in configured module paths. Additionally, core modules are missing

This is from a galaxy role (ashwin_sid.gaia_fw1) that I'm trying to implement.
Ansible version is 2.8.4
As part of the playbook it logs in and runs a show command. The output is then supposed to go to "BACKUP", but it throws this error: "The module file was not found in configured module paths. Additionally, core modules are missing".
This is the playbook:
serial: 1
gather_facts: no
tasks:
  - name: BACKUP
    import_role:
      name: ashwin_sid.gaia_fw1
      tasks_from: backup
I think this is where it breaks, where it references this file:
- name: create dir
  local_action: file path=={{ logdir | default('../BACKUP') }}/{{ r0.stdout }} state=directory
This is the task with the error in verbose mode.
TASK [ashwin_sid.gaia_fw1 : create dir] ****************************************************************************************************************************************************************
task path: /app/sandbox/playbooks/ashwin_sid.gaia_fw1/tasks/backup.yml:23
<localhost> ESTABLISH LOCAL CONNECTION FOR USER: xxxxx
<localhost> EXEC /bin/sh -c 'echo ~xxxxx && sleep 0'
<localhost> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /home/xxxxx/.ansible/tmp/ansible-tmp-1569528903.45-71335581192935 `" && echo ansible-tmp-1569528903.45-71335581192935="` echo /home/xxxxx/.ansible/tmp/ansible-tmp-1569528903.45-71335581192935 `" ) && sleep 0'
fatal: [lab_B]: FAILED! => {
"msg": "The module file was not found in configured module paths. Additionally, core modules are missing. If this is a checkout, run 'git pull --rebase' to correct this problem."
}
I'm not sure what other information to provide. I've created the "BACKUP" directory, and I don't think it's a permissions issue: it logs in fine and I think it runs the command, it just can't write?
You have an extra space in your playbook:
"local_action: file path=={{"
should be:
"local_action: file path=={{
The error shows an extra space after stating that the module was not found:
'"msg": "The module file was not found...'
After removing that space, it should work for you.
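Separately, a sketch of the same create-dir step written with the long-form file module and delegate_to: localhost instead of local_action, which avoids the fragile key=value string (not from the role; logdir and r0 come from the quoted task):

- name: create dir
  file:
    path: "{{ logdir | default('../BACKUP') }}/{{ r0.stdout }}"
    state: directory
  delegate_to: localhost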

Ansible cannot copy a file. Probably permissions

I want to copy a new certificate to Proxmox with Ansible.
My setup
.ssh/config is modified so that ssh machine logs in as root.
scp /Users/dir/key.pem /etc/pve/nodes/machine/pve-ssl.key works fine.
Problem
Ansible fails. I'm running this on an up-to-date MacBook; ansible --version reports ansible 2.2.1.0.
machine.yml
- hosts: machines
  vars:
    ca_dir: /Users/dir/
  tasks:
    - name: copy a pve-ssl.key
      copy:
        src="{{ ca_dir }}/key.pem"
        dest=/etc/pve/nodes/machine/pve-ssl.key
Permissions?
This works fine:
- hosts: machines
  vars:
    ca_dir: /Users/dir/
  tasks:
    - name: copy a pve-ssl.key
      copy:
        src="{{ ca_dir }}/key.pem"
        dest=/root/pve-ssl.key
So it's a permissions problem, but why? Ansible is logging in to my machine as root (checked with ansible machine -m shell -a 'who').
Probably something to do with group permissions, since
$ ls -la /etc/pve/nodes/machine/
drwxr-xr-x 2 root www-data 0 Feb 26 01:35 .
[...]
$ ls -la /root
drwx------ 5 root root 4096 Feb 26 12:09 .
[...]
How can I copy the file with ansible?
If the question is "what is the problem?", the answer is:
It's because of the /dev/fuse filesystem mounted on /etc/pve: Ansible simply cannot move the file from /tmp into the /etc/pve tree, just as a plain mv /tmp/file /etc/pve fails.
If the question is "how to deal with the problem?", then:
Copy the files somewhere else (e.g. /home/user) with Ansible, then copy them into place using the command module on the Proxmox host and delete the staged originals.
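A minimal sketch of that workaround, reusing ca_dir from the question (the staging path /root is illustrative):

- name: stage the key outside /etc/pve
  copy:
    src: "{{ ca_dir }}/key.pem"
    dest: /root/pve-ssl.key

- name: copy it into /etc/pve with a plain cp
  command: cp /root/pve-ssl.key /etc/pve/nodes/machine/pve-ssl.key

- name: remove the staged copy
  file:
    path: /root/pve-ssl.key
    state: absent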
You could also first touch the file and then copy it:
- name: touch empty file
  file:
    path: /etc/pve/nodes/machine/pve-ssl.key
    state: touch

- name: copy a pve-ssl.key
  copy:
    src: "{{ ca_dir }}/key.pem"
    dest: /etc/pve/nodes/machine/pve-ssl.key

How to replace a directory with a symlink using ansible?

I would like to replace /etc/nginx/sites-enabled with a symlink to my repo. I'm trying to do this using the file module, but that doesn't work, as the file module doesn't remove a directory even with the force option.
- name: setup nginx sites-available symlink
  file: path=/etc/nginx/sites-available src=/repo/etc/nginx/sites-available state=link force=yes
  notify: restart nginx
I could fall back to using shell.
- name: setup nginx sites-available symlink
  shell: test -d /etc/nginx/sites-available && rm -r /etc/nginx/sites-available && ln -sT /repo/etc/nginx/sites-available /etc/nginx/sites-available
  notify: restart nginx
Is there any better way to achieve this instead of falling back to shell?
When you take your action, it's actually two things:
delete a folder
add a symlink in its place
This is probably also the cleanest way to represent it in Ansible:
tasks:
  - name: remove the folder
    file: path=/etc/nginx/sites-available state=absent

  - name: setup nginx sites-available symlink
    file: path=/etc/nginx/sites-available
          src=/repo/etc/nginx/sites-available
          state=link
          force=yes
    notify: restart nginx
But, always removing and adding the symlink is not so nice, so adding a task to check the link target might be a nice addition:
- name: check the current symlink
  stat: path=/etc/nginx/sites-available
  register: sites_available
And a 'when' condition to the delete task:
- name: remove the folder (only if it is a folder)
  file: path=/etc/nginx/sites-available state=absent
  when: sites_available.stat.isdir is defined and sites_available.stat.isdir
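Putting the three pieces together, the sequence might look like this (a sketch; the stat has to run before the conditional delete):

tasks:
  - name: check the current symlink
    stat: path=/etc/nginx/sites-available
    register: sites_available

  - name: remove the folder (only if it is a folder)
    file: path=/etc/nginx/sites-available state=absent
    when: sites_available.stat.isdir is defined and sites_available.stat.isdir

  - name: setup nginx sites-available symlink
    file: path=/etc/nginx/sites-available src=/repo/etc/nginx/sites-available state=link force=yes
    notify: restart nginx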
