What is the best way to install Ansible from Terraform? [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 1 year ago.
Hi, I am fairly new to the Ansible and Terraform world. We need to install Ansible from a Terraform script. I have the script that installs Ansible, but I want to know the best way to call that script from Terraform so that it runs only after the virtual machine has been created. I have been reading about the local-exec and remote-exec provisioners, but Terraform does not recommend them. Any help on the topic would be appreciated!

From Terraform, you don't have much of a choice, really.
The only way to do it is to use a remote-exec provisioner, as you said.
Provisioners are not recommended because Terraform cannot model what they do: whatever your commands change on the machine is not tracked in Terraform's state, so if a script fails partway through, Terraform can only mark the whole resource as tainted; it cannot tell which of your changes were or were not applied.
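As a minimal sketch of that approach (the AMI ID, key name, user and key path below are placeholders, not values from the question), a remote-exec provisioner attached to the instance resource runs only after the machine is created and reachable over SSH:

```hcl
# Hypothetical example: all identifiers below are placeholders.
resource "aws_instance" "ansible_host" {
  ami           = "ami-0123456789abcdef0" # placeholder AMI
  instance_type = "t3.micro"
  key_name      = "my-key"                # placeholder key pair

  provisioner "remote-exec" {
    # Runs on the new VM after creation; a failing command fails the apply
    # and taints the resource.
    inline = [
      "sudo apt-get update",
      "sudo apt-get install -y ansible",
    ]

    connection {
      type        = "ssh"
      user        = "ubuntu"
      private_key = file("~/.ssh/id_rsa")
      host        = self.public_ip
    }
  }
}
```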
What I would recommend, if you are able to and if it matches your use case, is to use Packer.
Packer builds machine images that you later use as templates for your instances. You configure the image by passing commands and/or scripts to Packer. That way you run sudo apt install ansible (or whatever the command is) once, at image build time, and from then on launch that image wherever you need it.
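For illustration, a Packer template that bakes Ansible into an image might look like this; it is a hedged sketch, and the region, source AMI and image name are placeholders:

```hcl
# Hypothetical Packer (HCL2) template; all IDs and names are placeholders.
source "amazon-ebs" "ubuntu_ansible" {
  region        = "us-east-1"
  source_ami    = "ami-0123456789abcdef0" # placeholder Ubuntu base AMI
  instance_type = "t3.micro"
  ssh_username  = "ubuntu"
  ami_name      = "ubuntu-ansible-{{timestamp}}"
}

build {
  sources = ["source.amazon-ebs.ubuntu_ansible"]

  # Runs once, at image build time; every instance launched from the
  # resulting AMI already has Ansible installed.
  provisioner "shell" {
    inline = [
      "sudo apt-get update",
      "sudo apt-get install -y ansible",
    ]
  }
}
```

You then build the image with packer build and point Terraform's ami argument at the resulting AMI, so no provisioner is needed at all.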

Related

How do I set up Deployer to sync only specified folders from localhost to production? [closed]

Closed. This question needs details or clarity. It is not currently accepting answers.
Closed 2 years ago.
I have a local Laravel project that I want to deploy with Deployer. Currently I do it manually using an FTP tool, syncing only the app and resources folders, and that seems to work just fine.
I want to use Deployer, or some other tool I can run from the terminal, to sync or upload new files to the server.
Can someone help with a recipe or advice?
Do I need rsync set up through Deployer, or is there a way to do it without recipe/rsync.php?
Here are the steps I want configured (for now):
connect to the server; I have SSH access and I can probably configure a key
set up the two or three folders I want to sync, as well as the files that need to be ignored
These seem like simple tasks, but for some reason I have a hard time setting them up.
Thank you
I don't know if this question is still pending an answer, but one alternative is to use a versioning tool like Git: you track only the folders you care about and ignore the rest, and with the basic recipe you can deploy a GitHub/GitLab/Bitbucket project.
A more in-depth explanation of this topic can be found here.
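As a sketch of what that can look like with Deployer 7 (the repository URL, host and paths below are placeholders I made up, not values from the question):

```php
<?php
// deploy.php — hypothetical minimal recipe; every name here is a placeholder.
namespace Deployer;

require 'recipe/laravel.php';

// The server clones this repository itself, so anything in .gitignore
// never leaves your machine.
set('repository', 'git@github.com:user/project.git');

host('example.com')
    ->set('remote_user', 'deploy')
    ->set('deploy_path', '~/apps/project');

// Shared files/dirs survive between releases instead of being re-uploaded.
add('shared_files', ['.env']);
add('shared_dirs', ['storage']);
```

Running dep deploy then handles the SSH connection, the checkout and the release switch for you, with no recipe/rsync.php involved.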

Running a ruby program on AWS [closed]

Closed. This question needs details or clarity. It is not currently accepting answers.
Closed 3 days ago.
I have Ruby code that I would like to run on AWS. Is it possible to run Ruby code on AWS?
You can run Ruby scripts on anything that has Ruby installed.
AWS (Amazon Web Services) provides a suite of tools for hosting servers, among other things, so the question is really about the server rather than about AWS itself. If you launch an AWS server with an operating system such as Ubuntu, all you need to do is install Ruby; some Linux images even ship with it pre-installed.
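For example, after installing Ruby on the instance (sudo apt-get install -y ruby on Ubuntu), a trivial script confirms the interpreter works; the file name check.rb is just an illustration:

```ruby
# check.rb — hypothetical sanity check before running the real application.
def ruby_summary
  "Ruby #{RUBY_VERSION} on #{RUBY_PLATFORM}"
end

puts ruby_summary
```

Run it with ruby check.rb; if it prints a version string, the rest of your code will run the same way it does locally.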

Best practices for developing locally? [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 8 years ago.
When I'm developing locally, what are the best practices to follow before moving to a remote server?
Some practices are:
First, put your machine name (the hostname command in Linux/Mac) in bootstrap/start.php.
Copy any config file you want to change for the local environment into a folder called local (inside the config folder).
Do not forget to add the database file, or any other file that contains sensitive information, to .gitignore before putting the project under version control.
Files named env.*.php set environment variables for the environment you are in, for example env.local.php for local.
What I also want to ask is: should I put Composer packages that only help in development in require-dev? Do you do that?
If "help in development" means packages you don't really need in production, then yes.
Your composer.json can be exactly the same in production as in your development environment. All the packages you use only for development, such as testing packages (phpspec, phpunit, behat, ...), should go in the require-dev section and can safely stay there. But you also have to remember to install your packages in production by running
composer install --no-dev
since --dev is the default in Composer: https://github.com/composer/composer/blob/1.0.0-alpha7/CHANGELOG.md#100-alpha7-2013-05-04
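To illustrate the split (the package names and version constraints below are examples, not recommendations), a composer.json might look like:

```json
{
    "require": {
        "laravel/framework": "4.2.*"
    },
    "require-dev": {
        "phpunit/phpunit": "~4.0",
        "phpspec/phpspec": "~2.1"
    }
}
```

With composer install --no-dev, everything under require-dev is skipped on the production server, while a plain composer install on your machine installs both sections.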

Scheduling an existing AWS EC2 instance to start/stop [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 8 years ago.
Right now I am using the Auto Scaling Command Line Tool to launch a new EC2 instance once per day and run a script that terminates the instance upon completion. Now I need to do the same thing with a different script, but this one requires several Python modules to be installed. Therefore, I would like to schedule the start/stop of a single, existing instance rather than the launch/termination of a brand-new instance. I've scoured Amazon's documentation and blogs, but I can't determine whether this functionality is supported by Auto Scaling. How could this be accomplished?
It's not supported with Auto Scaling. If you want to keep doing what you are currently doing, you could install the Python modules with a cloud-init script.
You can also start/stop an existing instance with the command line tools, just not the Auto Scaling ones.
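As a hedged sketch of the cloud-init idea (the module names are placeholders, not ones from the question), user-data passed at launch installs the modules on first boot:

```yaml
#cloud-config
# Hypothetical user-data: installs Python modules before the script runs.
packages:
  - python-pip
runcmd:
  - pip install requests boto
```

For the start/stop part, a cron entry on another machine can call the CLI against the existing instance ID, e.g. aws ec2 start-instances --instance-ids i-0123456789abcdef0 and the matching stop-instances call (the instance ID here is a placeholder).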
My eventual solution was to set up an instance the way I wanted and then create an AMI from it. My autoscaling setup then starts/stops an instance of that AMI.

Execute puppet file in a remote server via ssh [closed]

Closed. This question is off-topic. It is not currently accepting answers.
Closed 10 years ago.
Is it possible to execute a Puppet manifest on a remote server using SSH?
I don't want to have to install Ruby on the remote server.
To my knowledge, only Puppet can interpret Puppet files, and I think it is easier to install Puppet's dependencies (including Ruby) than to find or develop a Puppet replacement.
Puppet (and Facter) needs to be able to inspect the filesystem, process tables and other kernel tables (to mention just a few things) of the remote server. To do this, it has to be executed on the remote server. SSH does not offer a way to run a command on host A in such a way that it executes on host B with access to host B's resources; it does offer a way to execute, from host A, a program installed on host B, but that is not what you want.
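To make the distinction concrete, here is a sketch of the closest you can get over SSH (the hostname, user and manifest name are placeholders): a masterless puppet apply, which still requires Puppet, and therefore Ruby, to be installed on the remote host:

```shell
# Copy a manifest to host B and apply it there. This only works because
# Puppet is already installed on host B; SSH merely triggers the run.
scp site.pp admin@hostB:/tmp/site.pp
ssh admin@hostB 'sudo puppet apply /tmp/site.pp'
```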
Puppet comes with an overhead: the space required to install it and its dependencies, plus the memory and CPU time it consumes. If you don't like the overhead, don't use Puppet.
Note: if it were possible to do what you want, you would save a small amount of space on host B but gain three new problems:
a significant increase in load on your puppetmaster, if it has to do all the work;
still a lot of work on the remote server, since it has to provide access to its resources;
a large increase in network traffic.
