Terraform stop command - Oracle

I'm new to Terraform. I wrote Terraform code to provision an instance on Oracle Cloud Infrastructure and it works fine. The problem is that there is no "stop" command in the Terraform CLI; there is only a "destroy" command.
Is there any way to stop resources instead of destroying them?

If you are looking for a built-in option, there is no such thing currently. You can request the feature here.
As a temporary workaround, you could experiment with user_data: update it to send a shutdown request and push the change with terraform apply.

I suggest using Terraform for provisioning (apply) and termination (destroy). Stopping and starting the instance through the OCI CLI is simple and can be done with a small shell script such as the one below.
This is less complex and easier to maintain for simpler requirements.
Instance.sh file:
# Pass the desired action as the first argument; replace the instance OCID with your own.
oci compute instance action --action $1 --instance-id ocid1.instance.oc1.iad.an.....7d3xamffeq
Start Command:
$ source Instance.sh start
Stop Command:
$ source Instance.sh stop
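If you want the script to confirm the result, one option (a sketch on my part, not part of the original answer) is to poll the instance's lifecycle state with the OCI CLI; the OCID below is the same placeholder as above:
# Show the current lifecycle state (e.g. RUNNING or STOPPED) after issuing the action.
oci compute instance get \
  --instance-id ocid1.instance.oc1.iad.an.....7d3xamffeq \
  --query 'data."lifecycle-state"' --raw-output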

You also have the option of setting up Functions that can manage the lifecycle actions. There is already a public solution, such as the one below, in case you would like to pursue this further.
https://github.com/AnykeyNL/OCI-AutoScale/blob/master/AutoScaleALL.py

GoLand IDE build and run with aws-vault

Trying to google anything for GoLand vs. Golang is proving quite hard. Everything I search for seems to come back with results about code or switching profiles; that is all already handled.
I had a project that took in JSON and processed the data. I was able to use the Run and Debug buttons to build and debug my Go code with the default configuration.
That changed when I started pulling data files from S3, which requires authenticating to AWS, and we use aws-vault for that.
The issue I am running into is that this configuration has no additional settings. There is a checkbox for Run after build, but no way for me to say Run with aws-vault.
Now I have to uncheck Run after build and add the flag
-gcflags="-N -l" -o app
and then attach to that process with Shift + Option + fn + F5.
What I am looking for is being able to run aws-vault exec user -- go ... within the IDE so that I do not have a build step, a run step, and then a manual attach to the process.
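Roughly the manual workflow described above, as I read it (the profile name user and the binary name app come from the question; the exact commands are my reconstruction):
# Build with optimizations and inlining disabled so the debugger gets clean symbols.
aws-vault exec user -- go build -gcflags="-N -l" -o app .
# Run the binary with the vaulted credentials, then attach the GoLand debugger to the "app" process.
aws-vault exec user -- ./app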
I figured out what I feel is at least a better solution, one that allows you to run any code (including the CLI) that uses an AWS SDK.
I am on a Mac, so osascript works for me, but the prompt can be whatever your OS supports. Or, if you have a YubiKey, you can use --prompt=ykman.
In ~/.aws there are two files, config and credentials, which tell the SDK how to authenticate.
To start, in ~/.aws/config there is a profile for each role that is needed. default is the role that you assume; all the others are roles that the code escalates to.
[default]
output=json
region=<your region>
mfa_serial=arn:aws:iam::<you>
[profile dev-base]
source_profile=default
role_arn=arn:aws:iam::<account to escalate to>
[profile staging-base]
source_profile = default
role_arn = arn:aws:iam::<account to escalate to>
[dev]
region = <your region>
[staging]
region = <your region>
Note: one oddity is that I had to put the role in this file along with the region so that the role exists.
This may not be needed if you are not using Java; you could put the full role definition in the previous file, but I also use Java, so this is my setup. In ~/.aws/credentials:
[dev]
ca_bundle = /Users/<username>/.aws/cert.pem
credential_process=aws-vault exec dev-base -j --prompt=osascript
[staging]
ca_bundle = /Users/<username>/.aws/cert.pem
credential_process=aws-vault exec staging-base -j --prompt=osascript
Note: an oddity here is that ca_bundle is specified. Something in Go was not happy with using the AWS_CA_BUNDLE environment variable, and this appears to work.
Now, when the code is run, a pop-up appears asking for an MFA token.
Also, when running any AWS CLI command, you can pass the profile you want with --profile, e.g. aws s3 ls --profile dev, and the pop-up will appear.
Editing these files manually when using aws-vault might not be the best way to do it, but at the moment this is how we manage them and it seems to give the best workflow.
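To tie this back to the IDE question, here is a quick sanity check that the credential_process hook fires (a sketch; the profile name dev comes from the config above):
# Trigger the credential_process for the "dev" profile; the MFA pop-up should appear.
AWS_PROFILE=dev aws sts get-caller-identity
# The Go SDK honours AWS_PROFILE as well, so one option is to set AWS_PROFILE=dev in the
# GoLand run configuration's environment variables and use the normal Run/Debug buttons.
AWS_PROFILE=dev go run .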

Missing feature in AWS CLI step

I'm using the Run an AWS CLI Script step with a referenced package, but this step does not allow adding the .NET Configuration Variables feature, whereas the plain Run a Script step does. I could be missing something, but is it possible to somehow enable that feature for Run an AWS CLI Script?

How do I use a CloudFormation output value in a script with Ansible

I'm trying to set up some automation for a school project. The gist of it is:
Launch an EC2 instance via CloudFormation. Then
Use cfn-init to
Install a very basic Ansible configuration
Download an Ansible playbook from S3
Run said playbook to install a Redshift cluster via CloudFormation
Install some necessary packages
Install some necessary Python modules
Download a Python script that will
Connect to the Redshift database
Create a table
Use the COPY command to import data into the table
It all works up to the point of executing the script. Doing so manually works a treat, but that is because I can copy the created Redshift endpoint into the script for the database connection. The issue is that I don't know how to extract that output value from CloudFormation so it can be inserted into the script for a fully automated (save the initial EC2 deployment) solution.
I see that Ansible has at least one means of doing so (cloudformation_facts, for instance), but I'm a bit foggy on how to implement it. I've looked at examples but it hasn't become any clearer. Without context I'm lost and so far all I've seen are standalone snippets.
In order to ensure an answer is listed:
I figured out the describe-stacks and describe-stack-resources sub-commands of the aws cloudformation CLI command. Using these, I was able to track down the information I needed. In particular, I needed to access a role. This is the command that I used:
aws cloudformation describe-stacks --stack-name=StackName --region=us-west-2 \
--query 'Stacks[0].Outputs[?OutputKey==`RedshiftClusterEndpointAddress`].OutputValue' \
--output text
I first used the describe-stacks subcommand to get a list of my stacks. The relevant stack is the first in the list (an array), so I used Stacks[0] at the top of my query to the describe-stacks subcommand. I then used Outputs, since I am interested in a value from the CloudFormation output list. I know the name of the key (RedshiftClusterEndpointAddress), so I used that as the filter. I then used OutputValue to return the value of RedshiftClusterEndpointAddress.
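A minimal sketch of wiring that value into the automation (the script name load_data.py and its --host flag are hypothetical placeholders for the actual Python script):
# Capture the Redshift endpoint from the stack outputs...
ENDPOINT=$(aws cloudformation describe-stacks --stack-name=StackName --region=us-west-2 \
  --query 'Stacks[0].Outputs[?OutputKey==`RedshiftClusterEndpointAddress`].OutputValue' \
  --output text)
# ...and pass it to the script instead of hard-coding it.
python load_data.py --host "$ENDPOINT"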

Initialization script on Elastic Beanstalk instances

I have some instances in AWS that are managed by Elastic Beanstalk, but I need to include a script on these instances so that as soon as one is terminated and rebooted again it runs my script. Where and how can I configure this?
.ebextensions might be what you're looking for. You can set up .ebextensions config files to do a lot of things, from installing packages and placing files to running raw bash commands; a small sketch follows the links below.
https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/ebextensions.html?shortFooter=true
https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/customize-containers-ec2.html?shortFooter=true
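For illustration, a minimal .ebextensions sketch (the file name, script path, and contents are hypothetical; the commands run when the configuration is deployed to the instance):
# .ebextensions/01-run-script.config
files:
  "/opt/scripts/on_deploy.sh":
    mode: "000755"
    owner: root
    group: root
    content: |
      #!/bin/bash
      # your logic here

commands:
  01_run_script:
    command: "/opt/scripts/on_deploy.sh"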

Docker run script in host on docker-compose up

My question relates to best practices for running a script when docker-compose up is issued.
Currently I'm sharing a volume between host and container so that changes to the script are visible to both host and container.
It's similar to a watcher script polling for changes to a configuration file; the script has to act on the host when changes occur, according to predefined rules.
How could I start this script on docker-compose up, or even from the Dockerfile of the service, so that whenever the container goes up the "watcher" can pick up any changes being made and written?
The container in question will always run on a Debian/Ubuntu OS and should be architecture independent, meaning it should be able to run on ARM as well.
I wish to run a script on the host, not inside the container. I need the host to change its network interface configuration to easily adapt to any environment; it is the host that needs to change, I repeat. This should be seamless to the user and easily editable through a web interface running inside a container, so it can adapt to new environments.
I currently do this with a script running on the host via crontab. I just wish to know best practices and examples of how to run a script on the host from inside a container, so that deployment can be as simple as the installing operator running docker-compose up.
It seems that there is no established best practice that applies to your case. A workaround proposed in How to run shell script on host from docker container? is to use a client/server trick:
The host runs a small server (choose a port and a request type that it should wait for).
The container, after it starts, sends this request to that server.
The host then runs the script / triggers the changes you want.
This is something that might have serious security issues, so use it at your own risk; a minimal sketch of the trick follows.
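A minimal sketch of that client/server trick, assuming ncat and curl are available (the port number, script path, and host name resolution are my assumptions):
# Host side (e.g. started from crontab @reboot or a systemd unit): wait for a ping, then run the host script.
while true; do
  ncat -l 9999 > /dev/null            # block until the container connects
  /usr/local/bin/reconfigure_net.sh   # hypothetical host-side script
done

# Container side (e.g. at the end of the entrypoint): notify the host.
# host.docker.internal resolves on Docker Desktop; on Linux add
# extra_hosts: ["host.docker.internal:host-gateway"] to the service in docker-compose.yml.
curl -s -m 2 http://host.docker.internal:9999 || true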
The script needs to run continuously in the foreground.
In your Dockerfile, use the CMD directive and define the script as the parameter.
When using the CLI, use docker run -d IMAGE SCRIPT
You can create an alias for docker-compose up. Put something like this in ~/.bash_aliases (in Ubuntu):
alias up="docker-compose up; ~/your_script.sh"
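Note that with this alias the script only runs after docker-compose up returns. If you want the containers detached and the script to run immediately afterwards, a variant (my suggestion, not part of the original answer) would be:
alias up="docker-compose up -d && ~/your_script.sh"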
I'm not sure if running scripts on the host from a container is possible, but if it is, it would be a severe security flaw. Containers should be isolated; that's the point of using containers.
