I'm using the "Run an AWS CLI Script" step with a referenced package, but that step does not allow me to enable the .NET Configuration Variables feature, whereas the plain "Run a Script" step does. I may be missing something, but is there some way to enable that feature for "Run an AWS CLI Script"?
Trying to google anything for GoLand vs Golang is proving quite hard: everything I search comes back about code or switching profiles, and that is all already handled.
I had a project that was taking in JSON and processing the data. I was able to use the Run and Debug button to build and debug my Go code with the default configuration.
That changed when I started pulling data files from S3, which requires authenticating to AWS; we use aws-vault for that.
The issue I am running into is that this configuration has no additional settings. There is a checkbox for "Run after build" but no way for me to say "Run with aws-vault".
Now I have to uncheck "Run after build" and add the flags
-gcflags="-N -l" -o app
and then attach to that process with Shift + Option + fn + F5.
What I am looking for is being able to run aws-vault exec user -- go ... within the IDE, so that I do not have a build step, a run step, and then a manual attach to the process.
I figured out what I feel is at least a better solution, one that lets you run any code (including the CLI) that uses an AWS SDK.
I am on a Mac, so osascript works for me, but the prompt can be whatever your OS supports; if you have a YubiKey you can use --prompt=ykman.
In ~/.aws there are two files, config and credentials, which tell the SDK how to authenticate.
To start, in ~/.aws/config there is a profile for each role that is needed. default is the role you assume directly; all the others are roles that the code escalates to.
[default]
output = json
region = <your region>
mfa_serial = arn:aws:iam::<you>

[profile dev-base]
source_profile = default
role_arn = arn:aws:iam::<account to escalate to>

[profile staging-base]
source_profile = default
role_arn = arn:aws:iam::<account to escalate to>

[dev]
region = <your region>

[staging]
region = <your region>
Note: one oddity is that I had to add those last two profiles, with just a region, so that the profile names exist.
This may not be needed if you are not using Java; you could put the full role in the previous entries (but I also use Java, so this is my setup). Then, in ~/.aws/credentials:
[dev]
ca_bundle = /Users/<username>/.aws/cert.pem
credential_process = aws-vault exec dev-base -j --prompt=osascript

[staging]
ca_bundle = /Users/<username>/.aws/cert.pem
credential_process = aws-vault exec staging-base -j --prompt=osascript
Note: an oddity here is that ca_bundle is specified per profile. Something in the Go SDK was not happy with the AWS_CA_BUNDLE environment variable, and this appears to work.
Now when the code is run, a pop-up displays asking for an MFA token.
Also, when running any AWS CLI command you can pass the profile you want with --profile (e.g. aws s3 ls --profile dev) and the pop-up will appear.
Editing these files manually when using aws-vault might not be the best way to do it, but at the moment this is how we manage them, and it seems to give the best workflow.
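For reference, the -j flag makes aws-vault print credentials in the JSON shape that the AWS credential_process contract requires. A minimal Python sketch of what the SDK parses (all values here are fake, for illustration only):

```python
import json

# Shape of the JSON a credential_process command must print
# (per the AWS shared-config documentation; values are fake).
sample = json.dumps({
    "Version": 1,
    "AccessKeyId": "AKIAFAKEFAKEFAKEFAKE",
    "SecretAccessKey": "fake-secret-key",
    "SessionToken": "fake-session-token",
    "Expiration": "2030-01-01T00:00:00Z",
})

creds = json.loads(sample)
assert creds["Version"] == 1  # the SDK requires Version to be exactly 1
print(creds["AccessKeyId"])
```

If the command exits non-zero or prints anything other than this shape, the SDK treats the profile as having no credentials.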
I'm new to Terraform. I wrote Terraform code to provision an instance on Oracle Cloud Infrastructure and it works fine. The problem is that there is no "stop" command in the Terraform CLI; there is only "destroy".
Is there any way to stop resources instead of destroying them?
If you are looking for a built-in solution, there is no such option currently. You can request the feature here.
As a temporary workaround, you could toy with user_data: update it to send a shutdown request and re-run terraform apply.
I suggest using Terraform for provisioning (apply) and termination (destroy). Stopping and starting the instance through the OCI CLI is simple and can be done with a small shell script such as this one.
This is less complex and more easily maintainable for simpler requirements.
Instance.sh File
# $1 is the lifecycle action to send (e.g. start or stop); the OCID is truncated here.
oci compute instance action --action "$1" --instance-id ocid1.instance.oc1.iad.an.....7d3xamffeq
Start Command:
$ source Instance.sh start
Stop Command:
$ source Instance.sh stop
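If you later want to drive the same action from code rather than a shell alias, a small wrapper around the CLI can validate the action first. A hypothetical Python sketch (the function name and allowed-action set are mine; action names follow the OCI CLI reference):

```python
import subprocess

# Subset of lifecycle actions the OCI CLI accepts; extend as needed.
ALLOWED_ACTIONS = {"START", "STOP", "RESET", "SOFTRESET"}

def instance_action(action: str, instance_id: str) -> None:
    """Validate the requested action, then shell out to the OCI CLI."""
    action = action.upper()
    if action not in ALLOWED_ACTIONS:
        raise ValueError(f"unsupported action: {action}")
    subprocess.run(
        ["oci", "compute", "instance", "action",
         "--action", action, "--instance-id", instance_id],
        check=True,
    )
```

For example, instance_action("stop", "<your instance OCID>") issues the same request as the shell script above.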
You also have the option of setting up Functions to manage the lifecycle actions. There is already a public solution, such as the one below, in case you would like to pursue it further.
https://github.com/AnykeyNL/OCI-AutoScale/blob/master/AutoScaleALL.py
I use Packer to build a Windows VM image on AWS.
To set up WinRM I want to use the "official" ConfigureRemotingForAnsible.ps1 script.
But if I point Packer's user_data_file at it, it doesn't work.
It works only if I manually add <powershell>...</powershell> as the first and last lines of the ConfigureRemotingForAnsible.ps1 script, which is inconvenient (I'd prefer to refer to the latest ConfigureRemotingForAnsible.ps1 maintained by the Ansible folks).
Any ideas how to get rid of the manual <powershell>...</powershell> lines?
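One way to avoid hand-editing the upstream script is to generate the wrapped file as a build step and point user_data_file at the generated copy. A minimal Python sketch (file names are placeholders; it assumes the script has already been downloaded):

```python
from pathlib import Path

def wrap_for_userdata(src: str, dst: str) -> None:
    """Wrap a PowerShell script in the <powershell> tags that EC2
    user data requires, leaving the original file untouched."""
    body = Path(src).read_text()
    Path(dst).write_text(f"<powershell>\n{body}\n</powershell>\n")

# e.g. wrap_for_userdata("ConfigureRemotingForAnsible.ps1", "winrm_userdata.ps1")
# then set user_data_file = "winrm_userdata.ps1" in the Packer template
```

This keeps the pristine upstream script, so re-downloading the latest version never clobbers local edits.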
I'm trying to set up some automation for a school project. The gist of it is:
Install an EC2 instance via CloudFormation. Then use cfn-init to:
- Install a very basic Ansible configuration
- Download an Ansible playbook from S3
- Run said playbook, which will:
  - Install a Redshift cluster via CloudFormation
  - Install some necessary packages
  - Install some necessary Python modules
  - Download a Python script that will:
    - Connect to the Redshift database
    - Create a table
    - Use the COPY command to import data into the table
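The cfn-init steps above are typically driven by an AWS::CloudFormation::Init metadata block on the instance resource. A rough sketch, not a working template (the resource name, bucket URL, and paths are made up for illustration):

```yaml
Resources:
  AutomationHost:
    Type: AWS::EC2::Instance
    Metadata:
      AWS::CloudFormation::Init:
        config:
          packages:
            yum:
              ansible: []                      # basic Ansible install
          files:
            /opt/automation/playbook.yml:      # playbook pulled from S3
              source: https://my-bucket.s3.amazonaws.com/playbook.yml
          commands:
            01_run_playbook:                   # commands run in alphabetical order
              command: ansible-playbook /opt/automation/playbook.yml
```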
It all works up to the point of executing the script. Doing so manually works a treat, but that is because I can copy the created Redshift endpoint into the script for the database connection. The issue I have is that I don't know how to extract that output value from CloudFormation so it can be inserted into the script for a fully automated (save the initial EC2 deployment) solution.
I see that Ansible has at least one means of doing so (cloudformation_facts, for instance), but I'm a bit foggy on how to implement it. I've looked at examples but it hasn't become any clearer. Without context I'm lost and so far all I've seen are standalone snippets.
In order to ensure an answer is listed:
I figured out the describe-stacks and describe-stack-resources subcommands of the aws cloudformation CLI. Using these, I was able to track down the information I needed. In particular, I needed to access a role. This is the command I used:
aws cloudformation describe-stacks --stack-name=StackName --region=us-west-2 \
--query 'Stacks[0].Outputs[?OutputKey==`RedshiftClusterEndpointAddress`].OutputValue' \
--output text
I first used the describe-stacks subcommand to get a list of my stacks. The relevant stack is the first in the list (an array), so I used Stacks[0] at the top of my query. I then used Outputs, since I am interested in a value from the CloudFormation output list. I know the name of the key (RedshiftClusterEndpointAddress), so I used that as the filter. I then used OutputValue to return the value of RedshiftClusterEndpointAddress.
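For anyone puzzled by the --query string: it is a JMESPath expression. The same extraction over a trimmed describe-stacks payload (fake values) looks like this in plain Python:

```python
# Trimmed describe-stacks response (fake values) showing what
# Stacks[0].Outputs[?OutputKey==`...`].OutputValue navigates.
response = {
    "Stacks": [
        {
            "StackName": "StackName",
            "Outputs": [
                {"OutputKey": "SomeOtherOutput", "OutputValue": "n/a"},
                {"OutputKey": "RedshiftClusterEndpointAddress",
                 "OutputValue": "demo.abc123.us-west-2.redshift.amazonaws.com"},
            ],
        }
    ]
}

endpoint = next(
    o["OutputValue"]
    for o in response["Stacks"][0]["Outputs"]
    if o["OutputKey"] == "RedshiftClusterEndpointAddress"
)
print(endpoint)  # demo.abc123.us-west-2.redshift.amazonaws.com
```

The value captured this way can then be substituted into the database-connection settings of the Python script.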
Our containers are hosted using Google Container Registry, and I am using id "com.bmuschko.docker-java-application" version "3.0.7" to build and deploy docker containers. However, I run into permission issues whenever I try to pull the base image or push the image to GCR (I am able to get to the latter step by pulling the image and having it available locally).
I'm a little confused about how to configure a particular gcloud account to be used whenever the plugin issues any Docker-related calls over the wire.
As a first attempt, I've tried to create a task that precedes any build or push commands:
task gcloudLogin(type: Exec) {
    executable "gcloud"
    args "auth", "activate-service-account", "--key-file", "$System.env.KEY_FILE"
}
However, this simple wrapper doesn't work as desired. Is there currently a supported way to have this plugin work with GCR?
I got in touch with the maintainers of the Gradle Docker plugin, and we found this to be a valid solution.