Provisioning with Ansible and Vagrant: multiple Vagrantfiles

I'm creating a monitoring environment that has monitoring servers and monitored boxes, plus of course an Ansible controller. For testing roles etc. I created a separate "project", which worked well for organizing development. Now that most of it is (hopefully) working as it should, I would love to make the whole infrastructure easier to manage, if possible from one place.
I've been googling this every now and then, and as far as I can tell there is still no solution for having one master Vagrantfile which could then call other Vagrantfiles to kickstart the boxes it needs.
Right now there is one Vagrantfile for creating the Ansible controller, three Ubuntu nodes, and three Windows nodes, and another to spin up three Ubuntu VMs for Grafana, Loki, and Prometheus. Then there will be a need for an Alertmanager, maybe for InfluxDB, etc., and keeping all those machines in one Vagrantfile hasn't worked very well for me. I would like to see a situation where there is:
A master Vagrantfile that creates the Ansible controller, and from that file I could call files like "monitoring_stack", "monitored_boxes", "common_purpose_boxes", and so on:
Master
├── Vagrantfile.ansible.controller
└── monitoring
    ├── monitored_boxes
    │   └── Vagrantfile.monitored
    ├── monitoring_servers
    │   └── Vagrantfile.monitoring
    └── whatever_boxes
        └── Vagrantfile.whatever
Something like that would be an ideal setup to manage.
If that's not doable or easy to get to, what other methods do you normally use to tackle similar setups?
Maybe I should just forget Vagrant altogether and go all-in on Pulumi or Terraform. Then again, that probably wouldn't solve this issue either, as I also want to provide a playground for other team members to test and play with new toys.
Thanks, everyone for any tips :)

Hopefully I'm not too late.
Vagrant supports multi-machine setups within the same Vagrantfile:
https://www.vagrantup.com/docs/multi-machine
I'm currently working on a dual-node setup with Ansible provisioning (WIP):
https://gitlab.com/UnderGrounder96/gitlab_jenkins
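For illustration, a minimal multi-machine Vagrantfile along those lines might look like this (the box name, IPs, and machine names are my assumptions, not from your setup):

# Vagrantfile (sketch): one Ansible controller plus a loop-defined monitoring stack
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/focal64"

  # Ansible controller
  config.vm.define "controller" do |ctrl|
    ctrl.vm.hostname = "controller"
    ctrl.vm.network "private_network", ip: "192.168.56.10"
  end

  # Monitoring servers, one define per machine
  %w[grafana loki prometheus].each_with_index do |name, i|
    config.vm.define name do |node|
      node.vm.hostname = name
      node.vm.network "private_network", ip: "192.168.56.2#{i}"
    end
  end
end

And since a Vagrantfile is plain Ruby, those per-group blocks could also live in separate files and be pulled in with something like eval File.read("monitoring/Vagrantfile.monitoring"), which gets close to the master-file layout asked about.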

Related

Ansible, creating a role to deploy an app on different machines with different versions and different configs

I need to install PHP on different machines, with different versions (7.1 and 7.4) and different configs, with Ansible.
I would like to use a single role, but with different vars files.
I would also like to use a parameter to choose between 7.1 and 7.4 and, based on that parameter, deploy the correct version with the correct config. The configs do not differ by much (mostly file and folder locations).
Is there a way to do this from a single role?
Thank you!
Kind regards.
Yes. Override the variables with those stored in the group_vars or host_vars directories of your inventory.
https://docs.ansible.com/ansible/latest/user_guide/playbooks_variables.html#variable-precedence-where-should-i-put-a-variable
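For example, a minimal sketch (the group names, variable names, and paths here are placeholders, not from the question):

# inventory/hosts
[php71]
legacy-web-01

[php74]
new-web-01

# inventory/group_vars/php71.yml
php_version: "7.1"
php_ini_path: /etc/php/7.1/fpm/php.ini

# inventory/group_vars/php74.yml
php_version: "7.4"
php_ini_path: /etc/php/7.4/fpm/php.ini

The single role then refers only to {{ php_version }} and {{ php_ini_path }}, and each host picks up the right values from its group.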

Selenium WebDriver and Ruby: standard directory and file structure

I want to split my Selenium WebDriver Ruby test suites, test cases, and test methods into separate files, so I can reuse code between them. Right now I have a separate Ruby file for every test suite, containing all of its test cases and methods. This works, but it's not a good way to maintain a lot of test suites over time.
So I wanted to know the standard way to do this file separation: going from one complete file to separate files for test cases and methods.
I found the following structure but don't understand how to use it with my requirements:
.
├── bin (not used)
├── data (not used)
├── doc (not used)
├── etc (I use it to store 3 different HOSTS files I overwrite depending on some parameters)
├── ext (not used)
├── lib (not used)
├── logs (keeps execution logs)
│   └── screenshots (keeps only failed test cases' screenshots)
└── tests (test suites... with test data, test cases, and methods, in a single file per test suite)
I have found the answer I was looking for. The directory I was most troubled about was "tests/", where I have all my tests, and the best way to share code between them is to put a module with the shared methods in a "tests/support" or "tests/shared" directory.
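In practice that can look like this (the file and method names here are made up for illustration):

# tests/support/login_helpers.rb
module LoginHelpers
  def log_in(driver, user, password)
    driver.find_element(id: "user").send_keys(user)
    driver.find_element(id: "password").send_keys(password)
    driver.find_element(id: "submit").click
  end
end

# tests/checkout_suite.rb
require "selenium-webdriver"
require_relative "support/login_helpers"

class CheckoutSuite
  include LoginHelpers   # shared methods, reusable by every suite

  def run
    driver = Selenium::WebDriver.for :firefox
    log_in(driver, "demo_user", "demo_pass")
    # ...test cases for this suite...
    driver.quit
  end
end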

Vagrant DB box - what are the best practices?

Our production servers setup is quite standard:
API + WEB + DB servers.
The API is the main one accessing the DB, but the WEB server also does so in certain cases.
I want to create a similar local setup using Vagrant.
This is where I got so far:
I have 2 git projects, a WEB and an API.
I turned them into Vagrant projects by putting a Vagrantfile in both main directories. Each Vagrantfile points to a dedicated box which includes all the server dependencies.
Both VMs take the code from the mounted Vagrant folder. So far it works like a charm.
Now I've got to the point where I need to create a VM for the DB. The thing is, I obviously don't have a DB git project, so where do I put the Vagrantfile in this case? It's very convenient that the Vagrantfile is part of the code.
What are the best practices?
I hope my question makes sense.
Thanks a lot.
I see 2 possibilities:
1. Create another Vagrantfile just for the DB. Even if you do not have code associated with it, you can still have a git project containing only the Vagrantfile. The downside is that you need to start Vagrant from 3 different files, so it's not ideal.
2. Put the DB VM into either the API or the WEB project (WEB would maybe make more sense, but it depends on your project), so you start 2 VMs from the same Vagrantfile; see the sketch below.
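For the second option, a minimal sketch of the combined Vagrantfile (the box name and IPs are placeholders):

# WEB project's Vagrantfile, now defining two machines
Vagrant.configure("2") do |config|
  config.vm.define "web" do |web|
    web.vm.box = "ubuntu/focal64"
    web.vm.network "private_network", ip: "192.168.33.10"
    web.vm.synced_folder ".", "/var/www/web"   # code still comes from the mounted folder
  end

  config.vm.define "db" do |db|
    db.vm.box = "ubuntu/focal64"
    db.vm.network "private_network", ip: "192.168.33.20"
  end
end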

Is it OK to use Ansible for deployment of apps instead of Makefiles

I have recently started using Ansible for configuration management of Linux servers.
My habit is that if I learn one tool, I try to use it as much as possible.
Initially, for my PHP web apps, I had a long Makefile which would download and install packages, make php.ini changes, extract zip files, copy files between folders, etc., to deploy my application in an automated way.
Now I am thinking of converting that Makefile deployment to Ansible, because then I can arrange separate YAML files for separate areas rather than one big Makefile for the whole project.
I want to know whether it is a good idea to use Ansible for that, or whether a Makefile is good enough.
Sure, Ansible is great for that. You can separate all your different steps into different playbooks, each defined in its own YAML file.
You can define common tasks and then include them in your specific playbooks.
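For instance, the Makefile steps described above map directly onto Ansible modules; a sketch (package names and paths are placeholders):

# deploy.yml
- hosts: webservers
  become: true
  tasks:
    - name: Install required packages
      apt:
        name: [php-fpm, unzip]
        state: present

    - name: Make a php.ini change
      lineinfile:
        path: /etc/php/7.4/fpm/php.ini
        regexp: '^upload_max_filesize'
        line: 'upload_max_filesize = 32M'

    - name: Extract the application archive
      unarchive:
        src: app.zip
        dest: /var/www/app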
You can also make use of Ansible roles to create a complete set of playbooks depending on the role of the server. For example, one set of servers could have a webservers role and another set a databases role.
You can find more info on roles here: http://docs.ansible.com/playbooks_roles.html
There are also a few modules out on the web that you can use to get started, and you can use Ansible Galaxy to import roles.
Of course, you can accomplish the same by breaking down your Makefile but maybe you want to learn a new tool.
Hope it helps.

Several apps (i.e. war files) on same Beanstalk instance

In order to conserve resources (and costs), I would like to put more than 1 WAR file (representing different apps) on the same EC2 Beanstalk instance.
I would then like to have app A mapping to myapp.elasticbeanstalk.com/applA using warA, and app B mapping to myapp.elasticbeanstalk.com/applB using warB.
But the console allows you to upload only a single WAR per instance.
1) So, I understand that it's not possible with the current interface. Am I right?
2) Still, is it possible to achieve this via "non-standard" ways: uploading warA via the interface and copying/updating warB to /tomcat6/webapps via SSH, FTP, etc.?
3) With (2), my concern is that warB will be lost each time the Beanstalk health checker decides to terminate the instance (after successive failed checks, for example) and start a new one. I would then have to make warB part of the customized AMI used by app A, and create a new version of this AMI each time I update warB.
Please, help me
regards
didier
You are correct! You cannot (yet) have multiple WARs in Beanstalk.
The Amazon forum answer is here:
https://forums.aws.amazon.com/thread.jspa?messageID=219284
There is a workaround though, using plain EC2 instead of Beanstalk:
https://forums.aws.amazon.com/thread.jspa?messageID=229121
http://blog.jetztgrad.net/2011/02/how-to-customize-an-amazon-elastic-beanstalk-instance/
Shameless plug: while not directly related, I've made a plugin for Maven 2 to automate Beanstalk deployments, and Elastic MapReduce as well. Check out http://beanstalker.ingenieux.com.br/
This is an old question but it took me some time to find a more up to date answer so I thought I'd share my findings.
Multiple WAR deployment is now supported natively by Elastic Beanstalk (and has been for some time).
Simply create a new zip file with each of your WAR files inside of it. If you want one of them to be available at the root context name it ROOT.war like you would if you were deploying to Tomcat manually.
Your zip file structure should look like this:
MyApplication.zip
├── .ebextensions
├── foo.war
├── bar.war
└── ROOT.war
Full details can be found in the Elastic Beanstalk documentation.
The .ebextensions folder is optional and can contain configuration files that customize the resources deployed to your environment. See Elastic Beanstalk Environment Configuration for information on using configuration files.
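For example, a minimal configuration file (the variable name and value here are placeholders) could set an environment variable shared by all the deployed WARs:

# .ebextensions/options.config
option_settings:
  aws:elasticbeanstalk:application:environment:
    APP_STAGE: staging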
There is another hack which allows you to boot an arbitrary JAR by installing Java and using a Node.js boot script:
http://docs.ingenieux.com.br/project/beanstalker/using-arbitrary-platforms.html
Hope it helps
