Well, I have a client with an intranet infrastructure that can't be reached from the internet or over VPN, so I have to access it through TeamViewer.
The client gave me 10 VMs (Linux CentOS 6) to work with (I can't create new ones or destroy them). I need to prepare this infrastructure to run my CI/CD pipeline and deliver the software, so the following services must be running before my software is deployed:
Docker
MongoDB
Postgres
Nginx
Jenkins
I'm thinking about two options to solve it:
Terraform CLI (remember I will need to access the client through TeamViewer and run terraform apply)
Ansible (here I can list the 10 machines and run against all of them with one playbook).
I've heard that Terraform is more for provisioning servers (VMs, EC2, ...), VPCs, subnets and load balancers, whereas Ansible is more about configuring each machine in a more granular way. If that's correct, I think Ansible is the right choice for me.
Any suggestions guys?
Yes.
Terraform provisions your environment from scratch. It is an Infrastructure as Code tool.
Ansible configures your environment. It is a configuration management tool.
Often, people combine both: first provision the network stack and servers using Terraform, then configure the applications inside the servers using Ansible.
You already have the VMs, so opting for a configuration management tool (Chef, Ansible, Puppet, SaltStack) better fits your use case.
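As a rough sketch of what that could look like here, assuming the 10 VMs are reachable over SSH from whichever machine runs Ansible (hostnames and package names below are placeholders, and CentOS 6 repositories are end-of-life, so an internal mirror or vault repo may be needed):

# Static inventory listing the 10 CentOS 6 VMs (placeholder hostnames)
cat > inventory.ini <<'EOF'
[ci_hosts]
vm01.client.local
vm02.client.local
# ... the remaining 8 VMs
EOF

# One playbook applied to every host in the group at once
cat > site.yml <<'EOF'
---
- hosts: ci_hosts
  become: yes
  tasks:
    - name: Install base services
      yum:
        name:
          - docker-io           # Docker for CentOS 6 came from EPEL as docker-io
          - nginx
          - postgresql-server
        state: present
    # MongoDB and Jenkins ship their own yum repositories; tasks omitted for brevity
EOF

ansible-playbook -i inventory.ini site.yml

The managed VMs only need SSH and Python; Ansible itself can live on whichever machine you reach over TeamViewer.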
I am quite new to Ansible, and I wonder if someone could save me some feeling my way in the dark and share the best way to do the following.
I have several cloud environments with (mostly) Windows hosts that I want to manage with Ansible. The thing is, my Ansible server sits outside these environments and I can't use WinRM directly to the various Windows hosts (security, you know...). So what I would like to do is add a Linux host to each cloud environment and use these hosts as a kind of proxy: I will access them from the Ansible server and use the psexec module to reach the Windows servers.
My problem, though, is that if I do that, my Ansible inventory will include only the Linux "proxies" and I will not be able to categorize the Windows servers into policy groups.
So again, can anyone share how to properly handle this? I guess I just need some way to create an inventory-like structure of the Windows servers and associate it with the appropriate "proxy" hosts.
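To make it concrete, something like this is roughly the structure I'm imagining (hostnames are placeholders): the Windows hosts keep their own groups, and a group variable records which Linux "proxy" fronts each environment.

cat > inventory.ini <<'EOF'
[cloud_a_proxy]
proxy-a.example.com

[cloud_a_windows]
win-a-01.internal
win-a-02.internal

# associate this environment's Windows group with its proxy host
[cloud_a_windows:vars]
proxy_host=proxy-a.example.com
EOF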
Many thanks,
Oren
I am a web developer and I am currently using Vagrant + VirtualBox to run my projects. I have a ProLiant server at home that I am not using at the moment. I was thinking: is there any way I could use it instead of the local VM, so I could run my projects remotely?
P.S.: Can you think of any other cool use cases for this server?
Vagrant is designed to work with VMs (or containers when using the Docker provider), not bare-metal servers, simply because the goal is to be able to build, use, destroy and rebuild environments programmatically, and a bare-metal server breaks that main use case.
A possible course of action is to install a hypervisor on your personal server and then configure your Vagrantfile to use the remote provider instead of the local one. As a direct benefit you'll be able to create a lot more instances, since your server most likely has more resources than your local laptop/desktop workstation.
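As a rough sketch of that remote setup, assuming KVM/libvirt is installed on the ProLiant and the vagrant-libvirt plugin is used (hostname, user and box are placeholders):

vagrant plugin install vagrant-libvirt

cat > Vagrantfile <<'EOF'
Vagrant.configure("2") do |config|
  config.vm.box = "generic/centos7"
  config.vm.provider :libvirt do |libvirt|
    libvirt.host = "proliant.home.lan"   # the remote ProLiant server
    libvirt.username = "youruser"
    libvirt.connect_via_ssh = true       # tunnel the libvirt connection over SSH
  end
end
EOF

vagrant up --provider=libvirt

The VMs then run on the server's resources, while the usual vagrant up / destroy workflow stays on your workstation.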
I need to deploy my Java application to an AWS EC2 instance using Terraform. The catch here is that we should not use a *.pem file to deploy the application.
I tried creating an ELB and associating instances using Terraform. I am able to deploy the application over SSH with a pem file to the EC2 instances' private IPs, but we shouldn't use *.pem or *.ppk files, as they won't be allowed on production servers.
I tried using Chef with Terraform, but that also requires a *.pem file to connect to the AWS instances.
Please let me know detailed steps/suggestions for how to deploy the application using Terraform without using a pem file.
If you can't make any changes to your instance after creating it (including deploying the application) then you will need to bake any and all changes into the AMI that Terraform deploys.
You might want to look into using Packer to create AMIs with your intended configuration and then use Terraform to deploy these AMIs.
For reference, this strategy is known as "immutable infrastructure" so you might want to do some further reading into this area.
If instead it's simply that SSH connectivity is not allowed and you can make changes over other ports, then you should be able to use an AMI that has a Chef client, Puppet agent or Salt minion on it (there may well be other tools that work over a non-SSH protocol/port, but this restriction rules out Ansible) and then use any of those tools to continue configuring your instance. Obviously you could find a suitable AMI from the AMI marketplace or, once again, use Packer to set up the relevant configuration management client.
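A minimal sketch of the bake-an-AMI approach mentioned above, using Packer's amazon-ebs builder (region, AMI ID and the install script are placeholders):

# Packer template that installs the application into a new AMI
cat > app-ami.json <<'EOF'
{
  "builders": [{
    "type": "amazon-ebs",
    "region": "us-east-1",
    "source_ami": "ami-0123456789abcdef0",
    "instance_type": "t3.micro",
    "ssh_username": "ec2-user",
    "ami_name": "java-app-{{timestamp}}"
  }],
  "provisioners": [{
    "type": "shell",
    "script": "install_app.sh"
  }]
}
EOF

packer build app-ami.json
# Terraform then launches instances from the resulting AMI, so no pem file is
# needed at deploy time: the application is already baked into the image.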
Assume the following stack:
A dedicated server
The server is running Vagrant
Vagrant is running 2 virtual machines: master + minion-1 (Kubernetes)
minion-1 is running a pod
Within the pod are 2 containers: webservice and fileservice
Both webservice and fileservice should be accessible from the internet, i.e. from outside, either via web.mydomain.com / file.mydomain.com or via www.mydomain.com/web/ / www.mydomain.com/file/.
Before using Kubernetes, I was using a remote proxy (HAproxy) and simply mapped domain names to an internal IP/port.
Now with Kubernetes, I imagine there is something dedicated to this task, but I honestly have no clue where to start.
I read about "createExternalLoadBalancer", Kubernetes Services and kube-proxy. Should a reverse proxy still be put somewhere (in front of Vagrant or within a pod)? Also, is using Vagrant a good option for production (staying within the scope of this question)?
The easiest thing for you to do at the moment is to make a Service of type NodePort and configure your HAproxy to point at minion-1 on the port the service exposes.
createExternalLoadBalancer is the old, less flexible way to do this; it requires the cloud provider to do the work. Type=NodePort doesn't require anything special from the cloud provider.
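As a rough sketch (service name, labels and ports are placeholders; the same pattern applies to fileservice):

cat > webservice-svc.yaml <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: webservice
spec:
  type: NodePort
  selector:
    app: webservice        # must match the pod's labels
  ports:
    - port: 80
      targetPort: 8080     # the container's port inside the pod
      nodePort: 30080      # exposed on every node, e.g. minion-1:30080
EOF

kubectl create -f webservice-svc.yaml
# HAproxy can then map web.mydomain.com to minion-1:30080 just as before.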
I need to set up a web server and a database server on EC2.
It should be easy to migrate to another service provider later.
Currently, I have a web server and a database server, each running on separate EC2 micro instances with software installed there remotely.
Can we run a Vagrant box on these micro instances with pre-installed and pre-configured software, like a LAMP stack, and use that instead? I would then end up with 2 Vagrant boxes: one for the web server and another for the database server.
Amazon already provides a means to copy an instance, but it can probably only be copied to another EC2 instance. If there is a need to move to some other provider, it will be the same process of re-installing everything. So a VirtualBox of my own, installed on Amazon's virtual machine, is what I was looking into.
I don't know how good or bad this is; I also suspect it will affect performance. Please share your views. The target is to have the environment prepared locally and to have the flexibility to deploy it to any service provider easily.
Running Vagrant inside your AWS box is probably not the right solution. Have you looked into the Vagrant AWS provider?
That will allow you to set up and provision your AWS boxes with Vagrant and Puppet or Chef. If you are using Puppet or Chef to provision your servers, then you will have a very portable "scripted" install that can easily be moved to another provider at a later date.
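A rough sketch of that approach with the vagrant-aws plugin (region, AMI and key pair are placeholders):

vagrant plugin install vagrant-aws

cat > Vagrantfile <<'EOF'
Vagrant.configure("2") do |config|
  config.vm.box = "dummy"                  # vagrant-aws uses a placeholder box
  config.vm.provider :aws do |aws, override|
    aws.region        = "us-east-1"
    aws.ami           = "ami-0123456789abcdef0"
    aws.instance_type = "t2.micro"
    aws.keypair_name  = "my-keypair"
    override.ssh.username         = "ec2-user"
    override.ssh.private_key_path = "~/.ssh/my-keypair.pem"
  end
  # Provisioning (shell/Puppet/Chef) goes here and stays portable across providers
  config.vm.provision "shell", path: "bootstrap.sh"
end
EOF

vagrant up --provider=aws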
So running a virtual machine on another virtual machine probably isn't the best. But if you want to install Vagrant on Amazon Linux, you can do:
wget https://releases.hashicorp.com/vagrant/2.2.4/vagrant_2.2.4_x86_64.rpm
sudo rpm -ivh vagrant_2.2.4_x86_64.rpm
The RPM is the CentOS version from the downloads page here: https://www.vagrantup.com/downloads.html
But then you cannot install VirtualBox to run a VM, so it doesn't actually work anyway.