How to set up a local development environment for Nomad+Consul Service Mesh

As per the HashiCorp documentation on Nomad+Consul, Consul service mesh cannot run on macOS/Windows, since those platforms do not support bridge networking.
https://www.nomadproject.io/docs/integrations/consul-connect
What is the recommended way to set up a local development environment for Nomad+Consul?

I'd suggest having a look at setting up your local environment using Vagrant (which is also a HashiCorp product) and VirtualBox. There are plenty of examples online, for example:
Here is one of the most recent setups with Nomad and Consul, although it is not parametrised much.
Here is one with the core HashiCorp stack, i.e. Nomad, Vault and Consul. This repo is quite old, but that merely means it uses old versions of the binaries, which should be easy to update.
Here is one with only Vault and Consul, but you can add Nomad in a similar way. In fact, this Vagrant setup and the way its files are structured seem pretty close to the one above.
I ran the first two last week with a simple
vagrant up
and it worked almost like a charm. I think I needed to upgrade my VirtualBox and maybe run vagrant up multiple times because of some weird runtime errors which I didn't want to debug.
Once Vagrant finishes the build you can
vagrant ssh
to get inside the created VM, although the configs are set up to mount volumes/sync files, and all UI components are also exposed on the default ports.
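For orientation, here is a minimal sketch of what such a Vagrant-based setup boils down to once the VM is built; the dev-mode agent flags below are illustrative assumptions, not taken from the linked repos:
vagrant up                         # build the Linux VM (VirtualBox provider)
vagrant ssh                        # get a shell inside it
# inside the VM:
consul agent -dev &                # dev-mode Consul agent
sudo nomad agent -dev-connect &    # dev-mode Nomad with Connect enabled (bridge networking needs Linux + root)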

Related

What is the proper way to remove a previously deployed service from kolla-ansible

I have a recently deployed kolla-ansible stable/victoria with several services I wanted to try but no longer need (designate, octavia, etc.). What is the "right" way to remove these services? I have attempted:
kolla-ansible -i multinode reconfigure --tags <services>
kolla-ansible -i multinode reconfigure --tags common,haproxy,<services>
kolla-ansible -i multinode deploy --tags <services>
In each case I'm left with still-running containers, leftover configuration artifacts (/etc/kolla/.*.conf) and HAProxy config files.
I know it's been a while since you posted this question, but I recently had the same problem and haven't found documentation about this anywhere.
The reason reconfigure and deploy don't do anything, even after you set enable_<service> to no, is that the Ansible playbooks only run tasks for a given service if its corresponding enable flag is true. If you look at the output of your --tags runs, you'll see that Ansible isn't really doing anything with regard to the disabled service.
Since Kolla-Ansible deploys everything with containers, I've found most services can simply be removed by doing the following:
Stop and delete all the containers running the service to be removed
Delete those containers' volumes
Remove the configuration and log files (under /etc/kolla and /var/log/kolla respectively)
Remove databases used by the service you're deleting
You can also remove the HAProxy config files for each service you're removing (see the sketch below).
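A rough shell sketch of those steps, assuming the service being removed is designate and that container, volume and path names follow the usual kolla-ansible conventions (verify them on your deployment first):
docker ps -a --filter name=designate -q | xargs -r docker stop
docker ps -a --filter name=designate -q | xargs -r docker rm
docker volume ls -q | grep designate | xargs -r docker volume rm
rm -rf /etc/kolla/designate* /var/log/kolla/designate
# database credentials can be found in /etc/kolla/passwords.yml
mysql -u root -p -e "DROP DATABASE IF EXISTS designate;"
rm -f /etc/kolla/haproxy/services.d/designate*    # HAProxy config path is an assumption
docker restart haproxy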
I know this is perhaps not in the spirit of automating OpenStack management with Ansible, but I've done this a few times without too many problems. I would avoid removing core services like Keystone, Neutron, Nova, MariaDB or RabbitMQ though, because by doing that you're destroying your entire OpenStack deployment anyway.
You can run the cleanup-host and cleanup-containers scripts on the hosts running your containers, but those remove everything related to Kolla-Ansible; if you want to remove only a specific service, you could modify those scripts. I'm aware that certain services like Nova, Neutron, Open vSwitch and Zun also reconfigure the host's networking, but I haven't found a reliable way to revert those changes, and cleanup-host/cleanup-containers don't address them either. If you stop and delete the openvswitch containers, Open vSwitch's interfaces go away on the next host reboot, and that may be a viable method for you too. Remember that Kolla-Ansible loads the openvswitch kernel module persistently, so that's something else you may want to remove as well.
I was also struggling with such a scenario recently, and I've found just these:
https://bugs.launchpad.net/kolla-ansible/+bug/1874044
https://review.opendev.org/c/openstack/kolla-ansible/+/504592
Unfortunately, it seems work on this started some time ago, but no big progress has been made yet.

Creating new VM nodes, is this Vagrant or Puppet?

I have an 8-CPU server and I installed CentOS 7 on it. I would like to dynamically and programmatically spin up and down VM nodes to do work, e.g. Hadoop nodes.
Is the technology I use for this Vagrant or Puppet, or something else? I have played around with Vagrant, but it appears that every new node requires a new directory in the file system; as far as I can tell, I can't just spin up a new VM with an API call. It doesn't look like there's even a real API for Vagrant, just machine-readable output. And if I understand it correctly, Puppet deals with configuration management for pre-existing nodes.
Is either of these the correct technology to use or is there something else that is more fitting to what I want to do?
Yes, you can use Vagrant to spin up a new VM. Configuration of that particular VM can be done using Puppet. Take a look at: https://www.vagrantup.com/docs/provisioning/puppet_apply.html
And if your problem is having separate directories for each VM, you're looking for a multi-machine setup: https://www.vagrantup.com/docs/multi-machine/
For an example of the multi-machine setup, take a look at https://github.com/mlambrichs/graphite-vagrant/blob/master/Vagrantfile
In the config directory you'll find a YAML file which defines an array that you can loop over to define different VMs.
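Once the machines are defined that way, the Vagrant CLI lets you target them individually or by pattern, which is the closest Vagrant gets to "spin up a node with a call" (the machine names here are placeholders):
vagrant up node1           # bring up a single machine
vagrant up /node[1-3]/     # regex matching several machines at once
vagrant destroy -f node2   # tear one back down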

Managing multiple environments with Vagrant

I want to set up 2 droplets at DigitalOcean, and I'm thinking about using Vagrant to handle the configuration.
It looks like a good way to go, since DigitalOcean provides both the box and the "runtime"/provider environment.
I was thinking about having a staging droplet/env where I would use Chef to install tools like nginx, Ruby, etc.
When the Vagrant provisioning/recipes work OK, I would like Vagrant to run the provisioning again, but now targeting my production droplet/env.
How can I achieve this behavior? Is it possible? Do I need to have multiple folders in my local machine? (e.g, ~/vagrant/stage and ~/vagrant/production)
Thank you.
You may want to revisit your actual deployment use case; I doubt you want to unconditionally provision and deploy both the staging and production droplets at the same time.
If you'd like to provision a DigitalOcean droplet to use as your development environment, there is a provider located here.
A more common strategy would be to provision your environment locally (using Ansible, Chef, etc.) and then use vagrant push to create an environment-specific deployment, i.e. vagrant push staging provisions and deploys against all hosts marked as staging servers. Inventories within Ansible cover one way to describe this separation.
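For illustration, assuming push strategies named staging and production are defined in the Vagrantfile, that workflow would look like:
vagrant push staging       # provision & deploy the staging droplet
vagrant push production    # once staging checks out, do the same for production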

In a vagrant/ansible set up, who is responsible for starting servers (nodejs, rails)

Our infrastructure is getting pretty complex with many moving pieces, so I'm setting up Vagrant with Ansible to spin up development environments.
My question is who (Vagrant or Ansible or another tool) should be responsible for starting various services, such as:
rails s (to start the Rails server)
nginx
nodejs (for a separate API)
I think the answer you're looking for is Ansible (or another tool).
Vagrant can run scripts and start services, but once you add a configuration management tool, that tool should do exactly that. It's part of its job: starting and managing services.
You want the same application configuration regardless of the machine you're spinning up (ESXi, Amazon EC2, Vagrant, whatever), and the best way to do that is outside of Vagrant.
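As a concrete example of letting the configuration management tool own service state, a one-off Ansible invocation might look like this (the group and inventory names are made up for illustration):
ansible webservers -i inventory -b -m service -a "name=nginx state=started enabled=yes"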

Continuous deployment & AWS autoscaling using Ansible (+Docker ?)

My organization's website is a Django app running on front end webservers + a few background processing servers in AWS.
We're currently using Ansible for both:
system configuration (from a bare OS image)
frequent manually-triggered code deployments.
The same Ansible playbook is able to provision either a local Vagrant dev VM, or a production EC2 instance from scratch.
We now want to implement autoscaling in EC2, and that requires some changes towards a "treat servers as cattle, not pets" philosophy.
The first prerequisite, moving from a statically managed Ansible inventory to a dynamic, EC2 API-based one, is done.
The next big question is how to deploy in this new world where throwaway instances come up & down in the middle of the night. The options I can think of are:
Bake a new fully-deployed AMI for each deploy, create a new AS launch config and update the AS group with it. This sounds very, very cumbersome, but is also very reliable because of the clean-slate approach, and will ensure that any system changes the code requires will be there. Also, no additional steps are needed on instance boot-up, so it's up and running more quickly.
Use a base AMI that doesn't change very often, automatically get the latest app code from git upon boot-up, and start the webserver. Once it's up, just do manual deploys as needed, like before. But what if the new code depends on a change in the system config (new package, permissions, etc.)? It looks like you then have to start taking care of dependencies between code versions and system/AMI versions, whereas the "just do a full Ansible run" approach was more integrated and more reliable. Is it more than just a potential headache in practice?
Use Docker? I have a strong hunch it can be useful, but I'm not sure yet how it would fit our picture. We're a relatively self-contained Django front-end app with just RabbitMQ + memcache as services, which we're never going to run on the same host anyway. So what benefits are there in building a Docker image using Ansible that contains system packages + the latest code, rather than having Ansible just do it directly on an EC2 instance?
How do you do it? Any insights / best practices?
Thanks!
This question is very opinion-based, but just to give you my take: I would go with pre-baking the AMIs with Ansible and then using CloudFormation to deploy your stacks with autoscaling, monitoring and your pre-baked AMIs. The advantage is that if you have most of the application stack pre-baked into the AMI, scaling up will happen faster.
Docker is another approach, but in my opinion it adds an extra layer in your application that you may not need if you are already using EC2. Docker can be really useful if, say, you want to containerize on a single server: maybe you have some extra capacity on a server, and Docker will allow you to run an extra application on the same server without interfering with existing ones.
Having said that, some people find Docker useful not as a way to optimize the resources of a single server, but rather as a way to pre-bake applications in containers. When you deploy a new version or new code, all you have to do is copy/replicate these containers across your servers, then stop the old container versions and start the new ones.
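In shell terms, that deploy flow is roughly the following (the image and container names are placeholders):
docker pull registry.example.com/myapp:v2                  # replicate the pre-baked image
docker stop myapp && docker rm myapp                       # stop/remove the old container version
docker run -d --name myapp registry.example.com/myapp:v2   # start the new one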
My two cents.
A hybrid solution may give you the desired result. Store the head Docker image in S3, and pre-bake the AMI with a simple fetch-and-run script on start (or pass it into a stock AMI with user-data). Handle version control by moving the head image to your latest stable version; you could probably also implement test stacks of new versions by making the fetch script smart enough to identify which Docker version to fetch based on instance tags, which are configurable at instance launch.
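A hedged sketch of such a fetch-and-run boot script, where the bucket name, the image naming scheme and the app_version instance tag are all assumptions for illustration:
#!/bin/bash
set -euo pipefail
# read the desired version from an EC2 instance tag
INSTANCE_ID=$(curl -s http://169.254.169.254/latest/meta-data/instance-id)
VERSION=$(aws ec2 describe-tags \
  --filters "Name=resource-id,Values=$INSTANCE_ID" "Name=key,Values=app_version" \
  --query 'Tags[0].Value' --output text)
# fetch the head image from S3, load it and run it
aws s3 cp "s3://my-app-images/app-${VERSION}.tar" /tmp/app.tar
docker load -i /tmp/app.tar
docker run -d --restart=always "app:${VERSION}"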
You can also use AWS CodeDeploy with Auto Scaling and your build server. We use the CodeDeploy plugin for Jenkins.
This setup allows you to:
perform your build in Jenkins
upload to S3 bucket
deploy one by one to all the EC2 instances that are part of the assigned AWS Auto Scaling group.
All that with a push of a button!
Here is the AWS tutorial: Deploy an Application to an Auto Scaling Group Using AWS CodeDeploy
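The same deployment can also be triggered without Jenkins via the AWS CLI, e.g. (application, deployment group and bucket names are placeholders):
aws deploy create-deployment \
  --application-name my-app \
  --deployment-group-name my-asg-group \
  --s3-location bucket=my-bucket,key=app.zip,bundleType=zip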
