Packer Error: "amazon-ebs: Error querying AMI: AuthFailure" - windows

I'm trying to create the packer build seen here: https://www.packer.io/intro/getting-started/build-image.html
PS C:\dev\tutorials\packer> packer build -var 'aws_access_key=AKI---------' -var 'aws_secret_key==+---------------------------------------' example.json
amazon-ebs output will be in this color.
==> amazon-ebs: Prevalidating AMI Name...
==> amazon-ebs: Error querying AMI: AuthFailure: AWS was not able to validate the provided access credentials
==> amazon-ebs: status code: 401, request id: []
Build 'amazon-ebs' errored: Error querying AMI: AuthFailure: AWS was not able to validate the provided access credentials
status code: 401, request id: []
==> Some builds didn't complete successfully and had errors:
--> amazon-ebs: Error querying AMI: AuthFailure: AWS was not able to validate the provided access credentials
status code: 401, request id: []
==> Builds finished but no artifacts were created.
I've tried giving my user the AmazonEC2FullAccess policy in AWS.
Is my command correct?
I am on Windows 8, using PowerShell.

Thank you for your help @MathiasR.Jessen.
The credentials come in a credentials.csv file. I opened this file with Excel, clicked cell C2, and copied the value there. That was the issue: Excel prepends an equals sign at the beginning of the value box :/
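For anyone else hitting this, one way to sidestep the copy-from-Excel pitfall entirely is to export the keys as environment variables in PowerShell and drop the -var flags. This is just a sketch: it assumes the getting-started template leaves aws_access_key/aws_secret_key empty so the amazon-ebs builder falls back to the standard AWS credential chain, and the key values below are placeholders.
  # placeholders, not real keys
  $env:AWS_ACCESS_KEY_ID     = "AKI---------"
  $env:AWS_SECRET_ACCESS_KEY = "---------------------------------------"
  packer build example.json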

Related

GlueContext write_dynamic_frame fails after adding BucketOwnerFullControl configuration

I am trying to write a DataFrame into a different account's S3 bucket as JSON output.
The following code is failing with an S3 Access Denied error in a Glue Spark streaming job. But if I run the code without the first line, it works and the output is written into the S3 bucket:
glueContext._jsc.hadoopConfiguration().set("fs.s3.canned.acl", "BucketOwnerFullControl")
glueContext.write_dynamic_frame.from_options(frame=dynamic_df, connection_type="s3",
                                             connection_options={"path": output_path},
                                             format=file_format, transformation_ctx="datasink")
Here is the error log:
com.amazon.ws.emr.hadoop.fs.shaded.com.amazonaws.services.s3.model.AmazonS3Exception:
Access Denied (Service: Amazon S3; Status Code: 403; Error Code:
AccessDenied; Request ID: W52125NY7G3EF7WH; S3 Extended Request ID:
4t9JOJedv2qNRy6W8ySxdQQ7r+TMN1MWpZCFOK1IKO6W4gx4a2oKuK5vwXUPnh4HkkPAG+LnEIc=;
Proxy: null), S3 Extended Request ID: 4t9JOJedv2qUPnh4HkkPAG+LnEIc= at
com.amazon.ws.emr.hadoop.fs.shaded.com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleErrorResponse(AmazonHttpClient.java:1819)
at
com.amazon.ws.emr.hadoop.fs.shaded.com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleServiceErrorResponse(AmazonHttpClient.java:1403)
at
This looks strange to me, as the bucket has full permissions and the write works perfectly when the second line is executed alone, but then the object owner is still the Glue account, which is what I am trying to change with fs.s3.canned.acl.
The destination bucket is also set up with the "Bucket owner preferred" option.
Please suggest what I am doing wrong.
Thanks
It's not enough to have permissions in the bucket policy alone.
The Glue role was missing the s3:PutObjectAcl permission in IAM.
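For anyone fixing the same thing, a minimal sketch of the IAM statement to attach to the Glue job's role; the bucket name below is a placeholder for the cross-account destination bucket.
  {
    "Effect": "Allow",
    "Action": [
      "s3:PutObject",
      "s3:PutObjectAcl"
    ],
    "Resource": "arn:aws:s3:::destination-bucket/*"
  }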

Using CDK deploy cannot read temporary S3 bucket that holds Lambda Code

When I deploy Lambda code using CDK, the deploy process (CloudFormation, presumably running under my user) does not seem to have access to the bucket that holds the Lambda code.
I followed this tutorial: https://intro-to-cdk.workshop.aws/what-is-cdk.html and see this error when I run cdk deploy:
Lambda8C48573D) Your access has been denied by S3, please make sure your request credentials have permission to GetObject for cdktoolkit-stagingbucket-19kn1ypcmzq2q/assets/5327df
Lambda Code:
const handler = new lambda.Function(this, "TimestreamLambda", {
  runtime: lambda.Runtime.NODEJS_10_X,
  code: lambda.Code.fromAsset(path.join(__dirname, '../resources')),
  handler: "index.hello_world",
  ...
cdk and @aws-cdk version is 1.73.0, but I also tried with 1.71.0.
Notes:
I see the bucket under my account (in my region).
When logged into this account I can see and download the asset file.
The downloaded zip file has the correct contents.
More error details:
12/24 | 9:15:19 PM | CREATE_FAILED | AWS::Lambda::Function | TimestreamLambda (TimestreamLambda8C48573D) Your access has been denied by S3, please make sure your request credentials have permission to GetObject for cdktoolkit-stagingbucket-28hiljazvaim/assets/5327df740bdc9c380ff567xxxxxxxxxxx7a68a.zip. S3 Error Code: AccessDenied. S3 Error Message: Access Denied (Service: AWSLambdaInternal; Status Code: 403; Error Code: AccessDeniedException; Request ID: 1b813776-7647-4767-89bc-XXXXXXXXX; Proxy: null)
new Function (/Users/<user>/dev/cdk/cdk-workshop/node_modules/@aws-cdk/aws-lambda/lib/function.ts:593:35)
\_ new CdkWorkshopStack (/Users/<user>/dev/cdk/cdk-workshop/lib/cdk-workshop-stack.ts:33:21)
I also see this (using the -v option) during deploy:
env: {
CDK_DEFAULT_REGION: 'us-west-2',
CDK_DEFAULT_ACCOUNT: '94646XXXXX',
CDK_CONTEXT_JSON: '{"@aws-cdk/core:enableStackNameDuplicates":"true","aws-cdk:enableDiffNoFail":"true","@aws-cdk/core:stackRelativeExports":"true","aws:cdk:enable-path-metadata":true,"aws:cdk:enable-asset-metadata":true,"aws:cdk:version-reporting":true,"aws:cdk:bundling-stacks":["*"]}',
CDK_OUTDIR: 'cdk.out',
CDK_CLI_ASM_VERSION: '7.0.0',
CDK_CLI_VERSION: '1.73.0'
}
As it turns out, this was an issue with the internal authentication system my company uses to access AWS. Instead of using my regular AWS account, I had to create a temporary account (which also sets a temporary token).
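As a general diagnostic (an extra suggestion, not something from the thread above), the AWS CLI can show which identity and account the deploy credentials actually resolve to, which makes it easy to compare against the account that owns the cdktoolkit staging bucket:
  aws sts get-caller-identity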

Terraform azurerm_virtual_machine_extension error "extension operations are disallowed"

I have written a Terraform template that creates an Azure Windows VM. I need to configure the VM to enable PowerShell remoting so that the release pipeline can execute PowerShell scripts. After the VM is created I can RDP to it and do everything needed to enable PowerShell remoting; however, it would be ideal if I could script all of that so it could be executed in a release pipeline. There are two things that prevent that.
The first, and the topic of this question, is that I have to run "WinRM quickconfig". I have the template working such that when I RDP to the VM after creation and run "WinRM quickconfig", I receive the following responses:
WinRM service is already running on this machine.
WinRM is not set up to allow remote access to this machine for management.
The following changes must be made:
Configure LocalAccountTokenFilterPolicy to grant administrative rights remotely to local users.
Make these changes [y/n]?
I want to configure the VM in Terraform so LocalAccountTokenFilterPolicy is set and it becomes unnecessary to RDP to the VM to run "WinRM quickconfig". After some research it appeared I might be able to do that using the azurerm_virtual_machine_extension resource. I added this to my template:
resource "azurerm_virtual_machine_extension" "vmx" {
name = "hostname"
location = "${var.location}"
resource_group_name = "${var.vm-resource-group-name}"
virtual_machine_name = "${azurerm_virtual_machine.vm.name}"
publisher = "Microsoft.Azure.Extensions"
type = "CustomScript"
type_handler_version = "2.0"
settings = <<SETTINGS
{
# "commandToExecute": "powershell Set-ItemProperty -Path 'HKLM:\\SOFTWARE\\Microsoft\\Windows\\CurrentVersion\\Policies\\System' -Name 'LocalAccountTokenFilterPolicy' -Value 1 -Force"
}
SETTINGS
}
When I apply this, I get the error:
Error: compute.VirtualMachineExtensionsClient#CreateOrUpdate: Failure sending request: StatusCode=0 -- Original Error: autorest/azure: Service returned an error. Status=<nil> Code="OperationNotAllowed" Message="This operation cannot be performed when extension operations are disallowed. To allow, please ensure VM Agent is installed on the VM and the osProfile.allowExtensionOperations property is true."
I couldn't find any Terraform documentation that addresses how to set the allowExtensionOperations property to true. On a whim, I tried adding the property "allow_extension_operations" to the os_profile block in the azurerm_virtual_machine resource, but it is rejected as an invalid property. I also tried adding it to the os_profile_windows_config block, and it isn't valid there either.
I found a statement on Microsoft's documentation regarding the osProfile.allowExtensionOperations property that says:
"This may only be set to False when no extensions are present on the virtual machine."
https://learn.microsoft.com/en-us/dotnet/api/microsoft.azure.management.compute.models.osprofile.allowextensionoperations?view=azure-dotnet
This implies to me that the property is true by default, but it doesn't actually say that, and it certainly isn't acting like that. Is there a way in Terraform to set osProfile.allowExtensionOperations to true?
Running into the same issue adding extensions using Terraform. I created a Windows 2016 custom image, with:
provider "azurerm" version = "2.0.0"
Terraform 0.12.24
Terraform apply error:
compute.VirtualMachineExtensionsClient#CreateOrUpdate: Failure sending request: StatusCode=0
-- Original Error: autorest/azure: Service returned an error.
Status=<nil>
Code="OperationNotAllowed"
Message="This operation cannot be performed when extension operations are disallowed. To allow, please ensure VM Agent is installed on the VM and the osProfile.allowExtensionOperations property is true."
I ran into the same error; the possible solution depends on two things here.
You have to use provider "azurerm" version = "2.5.0", and you have to pass the os_profile_windows_config block (see below) in the virtual machine resource as well, so that Terraform will consider the extensions you are passing. This fixed my errors.
os_profile_windows_config {
  provision_vm_agent = true
}
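For context, here is a sketch of where that block sits; everything other than os_profile_windows_config is elided or illustrative and should match your existing azurerm_virtual_machine definition:
  resource "azurerm_virtual_machine" "vm" {
    # name, location, resource_group_name, network_interface_ids, vm_size,
    # storage_image_reference, storage_os_disk, os_profile, etc. as before

    os_profile_windows_config {
      provision_vm_agent = true
    }
  }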

How to write a policy in .yaml for a python lambda to read from S3 using the aws sam cli

I am trying to deploy a Python Lambda to AWS. This Lambda just reads files from S3 buckets when given a bucket name and file path. It works correctly on the local machine if I run the following command:
sam build && sam local invoke --event testfile.json GetFileFromBucketFunction
The data from the file is printed to the console. Next, if I run the following command, the Lambda is packaged and sent to my-bucket.
sam build && sam package --s3-bucket my-bucket --template-file .aws-sam\build\template.yaml --output-template-file packaged.yaml
The next step is to deploy in prod so I try the following command:
sam deploy --template-file packaged.yaml --stack-name getfilefrombucket --capabilities CAPABILITY_IAM --region my-region
The Lambda can now be seen in the Lambda console and I can run it, but no contents are returned. If I change the service role manually to one which allows S3 get/put, then the Lambda works; however, this undermines the whole point of using the AWS SAM CLI.
I think I need to add a policy to the template.yaml file. This link here seems to say that I should add a policy such as one shown here. So, I added:
Policies: S3CrudPolicy
under 'Resources:GetFileFromBucketFunction:Properties:'. I then rebuilt the app and re-deployed, and the deployment failed with the following errors in CloudFormation:
1 validation error detected: Value 'S3CrudPolicy' at 'policyArn' failed to satisfy constraint: Member must have length greater than or equal to 20 (Service: AmazonIdentityManagement; Status Code: 400; Error Code: ValidationError; Request ID: unique number
and
The following resource(s) failed to create: [GetFileFromBucketFunctionRole]. . Rollback requested by user.
I deleted the stack to start again. My thought was that 'S3CrudPolicy' is not an off-the-shelf policy that I can just use, but something I would have to define myself in the template.yaml file?
I'm not sure how to do this, and the docs don't seem to show any very simple use-case examples (from what I can see). If anyone knows how to do this, could you post a solution?
I tried the following:
S3CrudPolicy:
  PolicyDocument:
    - Action: "s3:GetObject"
      Effect: Allow
      Resource: !Sub arn:aws:s3:::${cloudtrailBucket}
      Principal: "*"
But it failed with the following error:
Failed to create the changeset: Waiter ChangeSetCreateComplete failed: Waiter encountered a terminal failure state Status: FAILED. Reason: Invalid template property or properties [S3CrudPolicy]
If anyone can help write a simple policy to read/write from S3, that would be amazing. I'll also need to write another one so Lambdas can invoke other Lambdas, so a solution here (I imagine something similar?) would be great, or a decent, easy-to-use guide on how to write these policy statements.
Many thanks for your help!
Found it!! In case anyone else struggles with this you need to add the following few lines to Resources:YourFunction:Properties in the template.yaml file:
Policies:
  - S3CrudPolicy:
      BucketName: "*"
The "*" will allow your lambda to talk to any bucket, you could switch for something specific if required. If you leave out 'BucketName' then it doesn't work and returns an error in CloudFormation syaing that S3CrudPolicy is invalid.

update existing infrastructure on heroku using terraform

I've got this infrastructure description:
variable "HEROKU_API_KEY" {}
provider "heroku" {
email = "sebastrident#gmail.com"
api_key = "${var.HEROKU_API_KEY}"
}
resource "heroku_app" "default" {
name = "judge-re"
region = "us"
}
Originally I forgot to specify a buildpack. It created the application on Heroku. I decided to add it to the resource entry:
buildpacks = [
  "heroku/java"
]
But when I try to apply the plan in Terraform, I get this error:
Error: Error applying plan:
1 error(s) occurred:
* heroku_app.default: 1 error(s) occurred:
* heroku_app.default: Post https://api.heroku.com/apps: Name is already taken
Terraform does not automatically rollback in the face of errors.
Instead, your Terraform state file has been partially updated with
any resources that successfully completed. Please address the error
above and apply again to incrementally change your infrastructure.
The Terraform plan looks like this:
Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but will not be
persisted to local or remote state storage.
------------------------------------------------------------------------
An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
+ create
Terraform will perform the following actions:
+ heroku_app.judge_re
    id:                <computed>
    all_config_vars.%: <computed>
    buildpacks.#:      "1"
    buildpacks.0:      "heroku/java"
    config_vars.#:     <computed>
    git_url:           <computed>
    heroku_hostname:   <computed>
    name:              "judge-re"
    region:            "us"
    stack:             <computed>
    web_url:           <computed>
Plan: 1 to add, 0 to change, 0 to destroy.
------------------------------------------------------------------------
Note: You didn't specify an "-out" parameter to save this plan, so Terraform
can't guarantee that exactly these actions will be performed if
"terraform apply" is subsequently run.
As a workaround, I tried to add a destroy step to my deploy.sh script:
terraform init
terraform plan
terraform destroy -force
terraform apply -auto-approve
But it does not destroy the resource as I get the message Destroy complete! Resources: 0 destroyed.
What is the problem?
Link to build
It looks like you also changed the name of the resource. Your original example has the resource name heroku_app.default while your plan has heroku_app.judge_re.
To point your state to the remote resource, so Terraform knows you are editing and not trying to recreate the resource, use terraform import:
terraform import heroku_app.judge_re judge-re
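For reference, a sketch of the resource definition the imported state would then map onto, combining the original heroku_app block with the added buildpacks (the resource label judge_re matches the plan output above):
  resource "heroku_app" "judge_re" {
    name   = "judge-re"
    region = "us"

    buildpacks = [
      "heroku/java"
    ]
  }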
In Terraform, you normally needn't destroy the whole stack when you just want to rebuild one or several resources in it.
terraform taint does this trick. The terraform taint command manually marks a Terraform-managed resource as tainted, forcing it to be destroyed and recreated on the next apply.
terraform taint heroku_app.default
Second, when you are troubleshooting why the resource isn't listed in the destroy output, please make sure you are pointing at the right Terraform tfstate file.
When you run terraform plan, do you see any resources which were already created?
