Can cloud-init be used to automate complex configuration such as UFW and Apache?

Cloud-init can handle basic configuration like creating users and groups, installing packages, mounting storage, and more (see Cloud Config Examples). But can it handle more complex tasks like those below, and if so, how? A minimal working example would be appreciated.
# KNOWN: Replicating the below user creation with sudo privileges and a home
# directory is possible through cloudinit
sudo adduser johnny
sudo usermod -aG sudo johnny
# KNOWN: Replicating the below public/private key creation is possible through
# cloudinit
ssh johnny@10.0.0.1 "ssh-keygen -t rsa"
# UNKNOWN: Is it possible to update the firewall rules in cloudinit or should
# one simply SSH in afterwards like so
ssh johnny@10.0.0.1 "
sudo ufw enable
sudo ufw allow http
sudo ufw allow https"
# UNKNOWN: Is it possible to deploy Let's Encrypt certificates or should one
# simply SSH in afterwards like so
ssh johnny@10.0.0.1 "
sudo service apache2 restart
sudo certbot --apache"
# UNKNOWN: Is it possible to clone and install git repositories or should one
# simply SSH in afterwards like so
ssh johnny@10.0.0.1 "
GIT_NAME=johnny
GIT_EMAIL=johnny.rico@citizen.federation
git config --global user.name $GIT_NAME
git config --global user.email $GIT_EMAIL
git clone git@github.com:Federation/clandathu.git
cd clandathu/install
make --kill-em-all
sudo make install"

If you're referring specifically to cloud-config, then none of the unknowns you have listed have dedicated modules. However, you can run arbitrary shell commands via the runcmd module, or specify a script as your user data instead of a cloud-config; it just has to start with #! rather than #cloud-config. If you want both a cloud-config and a custom shell script, you can build a MIME multi-part archive with a cloud-init helper command.
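For instance, a minimal cloud-config covering the three unknowns via runcmd might look like the sketch below. It is untested, and the domain, email address, and repository URL are placeholders you would substitute:
#cloud-config
packages:
  - apache2
  - certbot
  - python3-certbot-apache
  - git
runcmd:
  # Firewall rules (--force skips ufw's interactive confirmation)
  - ufw allow http
  - ufw allow https
  - ufw --force enable
  # Let's Encrypt certificate for Apache (placeholder domain and email)
  - certbot --apache --non-interactive --agree-tos -m admin@example.com -d example.com
  # Clone a repository as the johnny user created earlier in the config
  - sudo -u johnny git config --global user.name johnny
  - sudo -u johnny git clone https://github.com/example/repo.git /home/johnny/repo
Package names here are the Ubuntu/Debian ones; on other distributions the certbot packages differ.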

Related

Setting up Hadoop on a RHEL machine with a security policy

I have been playing around with a Hadoop installation on CentOS for a while but today when I shifted to RHEL I got pesky password prompts when trying to start the pseudo-distributed cluster. After hours of poking around I finally managed to get rid of them by removing the security policy I had selected during installation of RHEL.
Looks like some aspect of the security policy was not letting me set up passwordless SSH to allow the different servers to communicate.
Going forward I would like to be able to run a cluster on machines with security policy enabled. What are the changes that I need to make, or where should I start looking into, to get the right set of network configurations?
I got pesky password prompts when trying to start the pseudo-distributed cluster
That's a sign you did not correctly establish a passwordless SSH keypair. Perhaps you typed a passphrase when you generated the key, or you didn't add the public key correctly to the authorized_keys file.
This should not prompt for a password:
$ ssh localhost
And if it does, generate the keys again without a passphrase:
$ ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa
$ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
$ chmod 0600 ~/.ssh/authorized_keys
Also, RHEL systems need SELinux disabled. I believe the Cloudera and Hortonworks install guides also have you turn the firewall off.
If you want a secure cluster, you would install and configure MIT Kerberos or Active Directory.
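On a systemd-based RHEL-family test node, what those guides have you do typically amounts to something like the sketch below (not a production hardening recommendation):
# Put SELinux into permissive mode now and across reboots
sudo setenforce 0
sudo sed -i 's/^SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config
# Stop and disable the firewall
sudo systemctl disable --now firewalld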

Laravel Envoy with Vagrant: Permission denied (publickey)

Trying to use Envoy via Vagrant (Homestead) to deploy to a server on EC2 that I would normally use a .pem file for when I SSH into it.
When using @servers(['web' => 'ec2-user@myserver.com']) in my Envoy.blade.php,
I get: Permission denied (publickey).
Any help would be huge!
Answer is here: https://stackoverflow.com/a/32088143/13346162
You need to pass -A in your SSH string (per the man page, it enables forwarding of the authentication agent connection; this can also be specified on a per-host basis in a configuration file).
You will also need to add your SSH key for agent forwarding (on the machine that can access the git remote, which I assume is your localhost):
ssh-add -K ~/.ssh/your_private_key
Something like this:
@servers(['web' => '-A user@domain.com'])
@task('deploy')
cd /path/to/site
git status
@endtask
Git remote commands should now work.
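Putting it together, the flow from the Homestead VM might look like this sketch, where the key path and task name are placeholders:
eval "$(ssh-agent -s)"          # start an agent if one isn't already running
ssh-add ~/.ssh/my-ec2-key.pem   # load the EC2 key so -A can forward it
envoy run deploy                # -A in the server string forwards the agent to EC2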

Vagrant, Ansible, and SSH forwarding

I'm trying to use Vagrant and Ansible to create a developer VM environment. I'm able to connect just fine and install packages. My issue seems to be with ssh, git, and keyfiles. My setup is unfortunately rather complicated, and I don't have the ability to change that. The git repositories are hosted on a machine that I have to connect to via a bastion host with a keyfile.
My local ssh config file has all the necessary proxy commands to make this work. I have SSH agent forwarding enabled for my key, because I can log into the VM manually and use git. Via Ansible it doesn't seem to know about hosts that should be set up via the ssh config file.
I am not running the git clone as sudo, and I am using accept_hostkey. It just doesn't seem to know about the repository host at all.
I have also tried adding an ansible.cfg with the following command:
ssh_args = -o ControlPersist=15m -F ssh.config -q
The ssh.config file is the same as my ~/.ssh/config that happens to work when doing the git clones manually. I'm also doing this as the vagrant user manually, and I have remote_user set to vagrant in my playbook.
I'm just kind of stumped as to how this is supposed to work.
If I understand correctly, you can manually git clone into your Vagrant machine?
If so, then since both machines have exactly the same ~/.ssh/config file, you can do what I did when I hit an error during a git clone:
- name: Pull sources from the repository.
  git: repo='git@bitbucket.org:test/test.git' version=master dest=/var/www accept_hostkey=True force=yes recursive=no key_file=~/.ssh/id_rsa
Sometimes explicitly defining the key_file, accept_hostkey=True, and force=yes solves the problem.
On the other hand, if you want to explicitly define that Ansible always uses the ssh transport instead of paramiko, you can set it in your ansible.cfg file, which is located at /etc/ansible/ansible.cfg:
[defaults]
transport=ssh
There is another technique I have read about somewhere that you can also try: teach Ansible to talk to the Git server on your behalf (again, this change is in /etc/ansible/ansible.cfg):
[ssh_connection]
ssh_args = -o ForwardAgent=yes
Hope this helps. Thanks.
I'm not too familiar with Ansible, but from the docs, Ansible supports two SSH transports: OpenSSH and Paramiko (Python's SSH library).
Unless you manually choose which one to use, it might choose Paramiko instead of OpenSSH.
This could explain the trouble you are having, since ssh_args is an OpenSSH-specific setting.
So the issue turned out to be that I was actually running one of my git clones as root after all.
For the SSH key to be forwarded properly in that case, you have to edit /etc/sudoers (with visudo) and update env_keep so that SSH_AUTH_SOCK is preserved.
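For reference, the sudoers change is a one-liner; always edit it via visudo rather than directly:
Defaults    env_keep += "SSH_AUTH_SOCK"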

use ssh private key from host in vagrant guest

I want to clone a bunch of private git repositories while provisioning a vagrant box. According to this article this should be possible using config.ssh.forward_agent = true. However, when trying to connect to GitHub via something like ssh -T git@github.com -o StrictHostKeyChecking=no, it fails with the following error:
Warning: Permanently added 'github.com,192.30.252.130' (RSA) to the list of known hosts.
Permission denied (publickey).
I cut my configuration down to the simplest possible configuration. You can find it here: https://gist.github.com/TomTasche/31f7c45fcffc2997d43a
When I do "vagrant ssh" and try the same again, a similar error occurs:
Cloning into 'private-repositories'...
Warning: Permanently added the RSA host key for IP address '192.30.252.130' to the list of known hosts.
Permission denied (publickey).
fatal: The remote end hung up unexpectedly
Edit: the configuration linked above does work on a host running Ubuntu, but works on neither a Mac host nor a Windows host. My goal is to have a configuration that works on all three hosts.
Please check whether your host system has ssh-agent forwarding enabled. You can do so for example by adding this block to your ~/.ssh/config file:
Host *
ForwardAgent yes
If this is enabled vagrant ssh (and also vagrant provision) should be able to forward your key to the guest machine.
You also might want to check, using ssh-add -l, whether your ssh-agent knows about your SSH key. If it is in the list and you have agent forwarding activated, you should succeed. Otherwise you can add the key to your ssh-agent by running ssh-add <path to your key file>.
It sounds like you may be hitting this particular bug:
https://github.com/mitchellh/vagrant/issues/1735
(Despite it being "closed" it's actually not fixed)
On Windows, SSH Forwarding in Vagrant does not work properly by default (because of a bug in net-ssh).
However, there is a workaround or simple hack. You can auto-copy your local SSH key to the Vagrant VM via a simple provisioning script in your VagrantFile. Here's an example:
https://github.com/mitchellh/vagrant/issues/1735#issuecomment-25640783
Tom,
What you're doing is fairly generic in nature, and I don't think it is Vagrant specific.
Try some of the following to track down the issue:
Edit your /etc/ssh/sshd_config
Set LogLevel DEBUG
Restart the sshd service: sudo service sshd restart or /etc/init.d/sshd restart
tail -f /var/log/authlog -- note, the file may be something else like /var/log/authd.log, /var/log/secure, or similar.
Watch what happens when you connect. It should give you some indication of why it's failing.
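Condensed into commands (the auth log path varies by distribution, so adjust the tail target):
sudo sed -i 's/^#\?LogLevel.*/LogLevel DEBUG/' /etc/ssh/sshd_config   # raise sshd verbosity
sudo service sshd restart
sudo tail -f /var/log/secure   # or /var/log/auth.log, /var/log/authlog, ...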
Again sorry, I'm not that familiar with Vagrant but I'm wondering if the provisioning script is running as another user, in which case the agent forwarding may not work as expected?

Cannot download Docker images behind a proxy

I installed Docker on my Ubuntu 13.10 (Saucy Salamander) and when I type in my console:
sudo docker pull busybox
I get the following error:
Pulling repository busybox
2014/04/16 09:37:07 Get https://index.docker.io/v1/repositories/busybox/images: dial tcp: lookup index.docker.io on 127.0.1.1:53: no answer from server
Docker version:
$ sudo docker version
Client version: 0.10.0
Client API version: 1.10
Go version (client): go1.2.1
Git commit (client): dc9c28f
Server version: 0.10.0
Server API version: 1.10
Git commit (server): dc9c28f
Go version (server): go1.2.1
Last stable version: 0.10.0
I am behind a proxy server with no authentication, and this is my /etc/apt/apt.conf file:
Acquire::http::proxy "http://192.168.1.1:3128/";
Acquire::https::proxy "https://192.168.1.1:3128/";
Acquire::ftp::proxy "ftp://192.168.1.1:3128/";
Acquire::socks::proxy "socks://192.168.1.1:3128/";
What am I doing wrong?
Here is a link to the official Docker documentation on HTTP/HTTPS proxies:
https://docs.docker.com/config/daemon/systemd/#httphttps-proxy
A quick outline:
First, create a systemd drop-in directory for the Docker service:
mkdir /etc/systemd/system/docker.service.d
Now create a file called /etc/systemd/system/docker.service.d/http-proxy.conf that adds the HTTP_PROXY and HTTPS_PROXY environment variables:
[Service]
Environment="HTTP_PROXY=http://proxy.example.com:80/"
Environment="HTTPS_PROXY=http://proxy.example.com:80/"
If you have internal Docker registries that you need to contact without proxying you can specify them via the NO_PROXY environment variable:
Environment="HTTP_PROXY=http://proxy.example.com:80/"
Environment="HTTPS_PROXY=http://proxy.example.com:80/"
Environment="NO_PROXY=localhost,127.0.0.0/8,docker-registry.somecorporation.com"
Flush changes:
$ sudo systemctl daemon-reload
Verify that the configuration has been loaded:
$ sudo systemctl show --property Environment docker
Environment=HTTP_PROXY=http://proxy.example.com:80/
Environment=HTTPS_PROXY=http://proxy.example.com:80/
Restart Docker:
$ sudo systemctl restart docker
Footnote regarding HTTP_PROXY vs. HTTPS_PROXY: for a long time, setting HTTP_PROXY alone has been good enough. But with version 20.10.8, Docker has moved on to Go 1.16, which changes the semantics of this variable:
https://golang.org/doc/go1.16#net/http
For https:// URLs, the proxy is now determined by the HTTPS_PROXY variable, with no fallback on HTTP_PROXY.
Your APT proxy settings are not related to Docker.
Docker uses the HTTP_PROXY environment variable, if present. For example:
sudo HTTP_PROXY=http://192.168.1.1:3128/ docker pull busybox
But instead, I suggest you have a look at your /etc/default/docker configuration file: you should have a line to uncomment (and maybe adjust) to get your proxy settings applied automatically. Then restart the Docker server:
service docker restart
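For reference, the line to uncomment in /etc/default/docker would look something like this, using the asker's proxy address (restart the daemon again after editing):
export http_proxy="http://192.168.1.1:3128/"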
On CentOS the configuration file for Docker is at:
/etc/sysconfig/docker
Adding the below line helped me to get the Docker daemon working behind a proxy server:
HTTP_PROXY="http://<proxy_host>:<proxy_port>"
HTTPS_PROXY="http://<proxy_host>:<proxy_port>"
If you're using the new Docker for Mac (or Docker for Windows), just right-click the Docker tray icon and select Preferences (Windows: Settings), then go to Advanced, and under Proxies specify your proxy settings there. Click Apply and Restart and wait until Docker restarts.
On Ubuntu you need to set the http_proxy for the Docker daemon, not the client process. This is done in /etc/default/docker (see here).
To extend Arun's answer, for this to work in CentOS 7, I had to remove the "export" commands. So edit
/etc/sysconfig/docker
And add:
HTTP_PROXY="http://<proxy_host>:<proxy_port>"
HTTPS_PROXY="https://<proxy_host>:<proxy_port>"
http_proxy="${HTTP_PROXY}"
https_proxy="${HTTPS_PROXY}"
Then restart Docker:
sudo service docker restart
The source is this blog post.
Why a locally-bound proxy doesn't work
The Problem
If you're running a locally-bound proxy, e.g. listening on 127.0.0.1:8989, it WON'T WORK in Docker for Mac. From the Docker documentation:
I want to connect from a container to a service on the host
The Mac has a changing IP address (or none if you have no network access). Our current recommendation is to attach an unused IP to the lo0 interface on the Mac; for example: sudo ifconfig lo0 alias 10.200.10.1/24, and make sure that your service is listening on this address or 0.0.0.0 (ie not 127.0.0.1). Then containers can connect to this address.
A similar situation applies to the Docker server side. (To understand the server and client sides of Docker, try running docker version.) The server side runs on a virtualization layer that has its own localhost, so it won't connect to a proxy server on the localhost of the host OS.
The solution
So, if you're using a locally-bound proxy like me, basically you would have to do the following things to make it work with Docker for Mac:
Make your proxy server listen on 0.0.0.0 instead of 127.0.0.1. Caution: you'll need proper firewall configuration to prevent malicious access to it.
Add a loopback alias to the lo0 interface, e.g. 10.200.10.1/24:
sudo ifconfig lo0 alias 10.200.10.1/24
Set HTTP and/or HTTPS proxy to 10.200.10.1:8989 from Preferences in Docker tray menu (assume that the proxy server is listening on port 8989).
After that, test the proxy settings by running a command in a new container from an image which is not downloaded:
$ docker rmi -f hello-world
...
$ docker run hello-world
Unable to find image 'hello-world:latest' locally
latest: Pulling from library/hello-world
c04b14da8d14: Pull complete
Digest: sha256:0256e8a36e2070f7bf2d0b0763dbabdd67798512411de4cdcf9431a1feb60fd9
Status: Downloaded newer image for hello-world:latest
...
Notice: the loopback alias set by ifconfig does not persist after a reboot. Making it persistent is another topic; please check this blog post in Japanese (Google Translate may help).
This is the fix that worked for me: Ubuntu, Docker version: 1.6.2
In the file /etc/default/docker, add the line:
export http_proxy='http://<host>:<port>'
Restart Docker
sudo service docker restart
To configure Docker to work with a proxy you need to add the HTTPS_PROXY / HTTP_PROXY environment variable to the Docker sysconfig file (/etc/sysconfig/docker).
Depending on whether you use init.d or the service tool, you may need to add the "export" statement (see Debian bug report #767441: the examples in /etc/default/docker are misleading regarding the supported syntax):
HTTPS_PROXY="https://<user>:<password>@<proxy-host>:<proxy-port>"
HTTP_PROXY="https://<user>:<password>@<proxy-host>:<proxy-port>"
export HTTP_PROXY="https://<user>:<password>@<proxy-host>:<proxy-port>"
export HTTPS_PROXY="https://<user>:<password>@<proxy-host>:<proxy-port>"
The Docker repository (Docker Hub) only supports HTTPS. To get Docker working with SSL-intercepting proxies, you have to add the proxy root certificate to the system's trust store.
For CentOS, copy the file to /etc/pki/ca-trust/source/anchors/, update the CA trust store, and restart the Docker service.
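On CentOS that amounts to something like the following sketch, where the certificate filename is a placeholder:
sudo cp proxy-root-ca.crt /etc/pki/ca-trust/source/anchors/   # add the proxy's root CA
sudo update-ca-trust                                          # rebuild the system trust store
sudo systemctl restart docker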
If your proxy uses NTLMv2 authentication - you need to use intermediate proxies like Cntlm to bridge the authentication. This blog post explains it in detail.
After installing Docker, do the following:
[mdesales@pppdc9prd1vq ~]$ sudo HTTP_PROXY=http://proxy02.ie.xyz.net:80 ./docker -d &
[2] 20880
Then, you can pull or do anything:
[mdesales@pppdc9prd1vq ~]$ sudo docker pull base
2014/04/11 00:46:02 POST /v1.10/images/create?fromImage=base&tag=
[/var/lib/docker|aa088847] +job pull(base, )
Pulling repository base
b750fe79269d: Download complete
27cf78414709: Download complete
[/var/lib/docker|aa088847] -job pull(base, ) = OK (0)
In the new version of Docker, docker-engine, in a systemd based distribution, you should add the environment variable line to /lib/systemd/system/docker.service, as it is mentioned by others:
Environment="HTTP_PROXY=http://hostname_or_ip:port/"
As I am not allowed to comment yet:
For CentOS 7 I needed to activate the EnvironmentFile within docker.service, as described here: Control and configure Docker with systemd.
Edit: I am adding my solution, as pointed out by Nilesh. I needed to open /etc/systemd/system/docker.service and add the following within the section
[Service]
EnvironmentFile=-/etc/sysconfig/docker
Only then was the file /etc/sysconfig/docker loaded on my system.
If using a socks5 proxy, here is my test with Docker 17.03.1-ce, setting "all_proxy", and it worked:
# Set up socks5 proxy server
ssh sshUser#proxyServer -C -N -g -D \
proxyServerIp:9999 \
-o ExitOnForwardFailure=yes \
-o ServerAliveInterval=60
# Configure dockerd and restart.
# NOTICE: using "all_proxy"
mkdir -p /etc/systemd/system/docker.service.d
cat > /etc/systemd/system/docker.service.d/http-proxy.conf <<EOF
[Service]
Environment="all_proxy=socks5://proxyServerIp:9999"
Environment="NO_PROXY=localhost,127.0.0.1,private.docker.registry.com"
EOF
systemctl daemon-reload
systemctl restart docker
# Test whether can pull images
docker run -it --rm alpine:3.5
To solve the problem with curl in Docker build, I added the following inside the Dockerfile:
ENV http_proxy=http://infoprx2:8080
ENV https_proxy=http://infoprx2:8080
RUN apt-get update && apt-get install -y curl vim
Note that the ENV statement is BEFORE the RUN statement.
And in order to make the Docker daemon able to access the Internet (I use Kitematic with boot2docker), I added the following into /var/lib/boot2docker/profile:
export HTTP_PROXY=http://infoprx2:8080
export HTTPS_PROXY=http://infoprx2:8080
Then I restarted Docker with sudo /etc/init.d/docker restart.
The complete solution for Windows, to configure the proxy settings:
You can configure it directly by right-clicking the Docker icon, selecting Settings, and then Proxies.
There you can configure the proxy address, port, user name, and password, in this format:
<user>:<password>@<proxy-host>:<proxy-port>
Example:
geronimous:mypassword@192.168.44.55:8080
Nothing more than this.
If you are on Ubuntu, you should execute this command:
export https_proxy=http://your_name:password@ip_proxy:port
And reload Docker with:
service docker.io restart
Or go to /etc/docker.io with nano...
If you're in Ubuntu, execute these commands to add your proxy.
sudo nano /etc/default/docker
And uncomment the line that specifies the proxy:
#export http_proxy=http://username:password@10.0.1.150:8050
Then replace it with your appropriate proxy server, username, and password.
Then restart Docker using:
service docker restart
Now you can run Docker commands behind proxy:
docker search ubuntu
Perhaps you need to set up lowercase variables. In my case, my /etc/systemd/system/docker.service.d/http-proxy.conf file looks like this:
[Service]
Environment="ftp_proxy=http://<user>:<password>#<proxy_ip>:<proxy_port>/"
Environment="http_proxy=http://<user>:<password>#<proxy_ip>:<proxy_port>/"
Environment="https_proxy=http://<user>:<password>#<proxy_ip>:<proxy_port>/"
Good luck! :)
I was also facing the same issue behind a firewall. Follow the steps below:
$ sudo vim /etc/systemd/system/docker.service.d/http_proxy.conf
[Service]
Environment="HTTP_PROXY=http://username:password#IP:port/"
Don’t use or remove the https_prxoy.conf file.
Reload and restart your Docker container:
$ sudo systemctl daemon-reload
$ sudo systemctl restart docker
$ docker pull hello-world
Using default tag: latest
latest: Pulling from library/hello-world
1b930d010525: Pull complete
Digest: sha256:2557*********************************8
Status: Downloaded newer image for hello-world:latest
Simply setting proxy environment variables did not help me in version 1.0.1... I had to update the /etc/default/docker.io file with the correct value for the "http_proxy" variable.
On Ubuntu 14.04 (Trusty Tahr) with Docker 1.9.1, I just uncommented the http_proxy line, updated the value and then restarted the Docker service.
export http_proxy="http://proxy.server.com:80"
and then
service docker restart
Remove proxy from environment variables
unset http_proxy
unset https_proxy
unset no_proxy
and then restart your docker
On RHEL 6.6, only this works (note the use of export):
/etc/sysconfig/docker
export http_proxy="http://myproxy.example.com:8080"
export https_proxy="http://myproxy.example.com:8080"
(Note: both variables can use the http protocol.)
Thanks to https://crondev.com/running-docker-behind-proxy/
In my network, Ubuntu works behind a corporate ISA proxy server. And it requires authentication. I tried all the solutions mentioned above and nothing helped. What really helped was to write a proxy line in file /etc/systemd/system/docker.service.d/https-proxy.conf without a domain name.
Instead of
Environment="HTTP_PROXY=http://user#domain:password#proxy:8080"
or
Environment="HTTP_PROXY=http://domain\user:password#proxy:8080"
and some other replacement such as # -> %40 or \ -> \\ I tried to use
Environment="HTTP_PROXY=http://user:password#proxy:8080"
And it works now.
Try this:
sudo HTTP_PROXY=http://<IP address of proxy server>:<port> docker -d &
This doesn't exactly answer the question, but might help, especially if you don't want to deal with service files.
In case you are the one hosting the image, one way is to convert the image to a tar archive instead, using something like the following on the server:
docker save <image-name> --output <archive-name>.tar
Simply download the archive and turn it back into an image:
docker load --input <archive-name>.tar
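A minimal round trip might look like this sketch, where the host name and image are placeholders:
docker save alpine:3.5 --output alpine.tar                  # on the machine with registry access
scp alpine.tar user@target-host:/tmp/                       # copy the archive to the offline host
ssh user@target-host 'docker load --input /tmp/alpine.tar'  # restore it as a local image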
I resolved the issue by following the steps below:
step 1: sudo systemctl start docker
step 2: sudo systemctl enable docker
(Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /usr/lib/systemd/system/docker.service.)
step 3: sudo systemctl status docker
step 4: sudo mkdir -p /etc/systemd/system/docker.service.d
step 5: sudo vi /etc/systemd/system/docker.service.d/proxy.conf
Set proxy as below
[Service]
Environment="HTTP_PROXY=http://proxy.server.com:80"
Environment="HTTPS_PROXY=http://proxy.server.com:80"
Environment="NO_PROXY=.proxy.server.com,*.proxy.server.com,localhost,127.0.0.1,::1"
step 6: sudo systemctl daemon-reload
step 7: sudo systemctl restart docker.service
step 8: vi /etc/environment and source /etc/environment
http_proxy=http://proxy.server.com:80
https_proxy=http://proxy.server.com:80
ftp_proxy=http://proxy.server.com:80
no_proxy=127.0.0.1,10.0.0.0/8,3.0.0.0/8,localhost,*.abc.com
I had a problem where I needed to use a proxy to reach Google's DNS for the project's dependencies while, at the same time, API requests needed to communicate with a private server.
For RHEL7 I configured the system like this:
edited the file /etc/sysconfig/docker and added:
Environment=http_proxy="http://ip:port"
Environment=https_proxy="http://ip:port"
Environment=no_proxy="hostname"
Then save the file and run:
sudo systemctl restart docker
After that, configure your Dockerfile:
Set up the environment variables first:
ENV http_proxy http://ip:port
ENV https_proxy http://ip:port
ENV no_proxy "hostname"
that's all! :)
