I have code that seems to "almost" deploy. It will fail with the following error:
10:55:25 AM | CREATE_FAILED | AWS::Lambda::EventSourceMapping | QFDSKafkaEventSour...iltynotifyEFE73996
Resource handler returned message: "Invalid request provided: The secret provided in 'sourceAccessConfigurations' is not associated with cluster some-valid-an. Please provide a secret associated with the cluster. (Service: Lambda, Status Code: 400, Request ID: some-uuid )" (RequestToken: some-uuid, HandlerErrorCode: InvalidRequest)
I've cobbled together the CDK stack from multiple tutorials while trying to learn CDK. I've gotten it to the point that I can deploy a lambda, specify one (or more) layers for the lambda, and even specify any of several different sources for triggers. But our production Kafka requires credentials... and I can't for the life of me figure out how to supply those so that this will deploy correctly.
Obviously, those credentials shouldn't be included in the git repo of my codebase. I assume I will have to set up a Secrets Manager secret with part or all of the values. We're using SCRAM-SHA-512, and it includes a user/pass pair. The 'secret_name' value to Secret() is probably the name/path of the Secrets Manager secret. I have no idea what the second, unnamed param is for, and I'm having trouble figuring that out. Can anyone point me in the right direction?
Stack code follows:
#!/usr/bin/env python3
from aws_cdk import (
    aws_lambda as lambda_,
    App, Duration, Stack
)
from aws_cdk.aws_lambda_event_sources import ManagedKafkaEventSource
from aws_cdk.aws_secretsmanager import Secret
class ExternalRestEndpoint(Stack):
    def __init__(self, app: App, id: str) -> None:
        super().__init__(app, id)

        secret = Secret(self, "Secret", secret_name="integrations/msk/creds")
        msk_arn = "some valid and confirmed arn"

        # Lambda layer.
        lambdaLayer = lambda_.LayerVersion(self, 'lambda-layer',
            code=lambda_.AssetCode('utils/lambda-deployment-packages/lambda-layer.zip'),
            compatible_runtimes=[lambda_.Runtime.PYTHON_3_7],
        )

        # Source for the lambda.
        with open("src/path/to/sourcefile.py", encoding="utf8") as fp:
            mysource_code = fp.read()

        # Config for it.
        lambdaFn = lambda_.Function(
            self, "QFDS",
            code=lambda_.InlineCode(mysource_code),
            handler="lambda_handler",
            timeout=Duration.seconds(300),
            runtime=lambda_.Runtime.PYTHON_3_7,
            layers=[lambdaLayer],
        )

        # Set up the event (managed Kafka).
        lambdaFn.add_event_source(ManagedKafkaEventSource(
            cluster_arn=msk_arn,
            topic="foreign.endpoint.availabilty.notify",
            secret=secret,
            batch_size=100,  # default
            starting_position=lambda_.StartingPosition.TRIM_HORIZON
        ))
Looking at your code sample, I understand that you are working with Amazon MSK as an event source, not self-managed (cross-account) Kafka.
I assume I will have to set up a Secrets Manager secret with part or all of the values
You don't need to set up new credentials. If you use MSK with SASL/SCRAM, you already have credentials, which must be associated with the MSK cluster.
As you can see from the doc, your secret name should start with AmazonMSK_, for example AmazonMSK_LambdaSecret.
So, in the code above, you will need to fix this line:
secret = Secret(self, "Secret", secret_name="AmazonMSK_LambdaSecret")
I assume you are already aware of the CDK Python docs, but will just add them here for reference.
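If the SASL/SCRAM secret already exists and is associated with the cluster, a minimal sketch (assuming CDK v2 and a placeholder secret name) might reference it instead of creating a new one; the secret value is typically a JSON pair of username/password:
from aws_cdk.aws_secretsmanager import Secret

# Reference the existing secret that is already associated with the MSK cluster.
# "AmazonMSK_LambdaSecret" is a placeholder name; its value is expected to be
# JSON like {"username": "...", "password": "..."} for SASL/SCRAM-SHA-512.
secret = Secret.from_secret_name_v2(self, "MskSecret", "AmazonMSK_LambdaSecret")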
Related
I am automating Dialogflow CX using the Python client libraries. That includes creation/update/deletion of agents, intents, entities, etc.
But on the first run, I am encountering the below error from Python.
If I log in to the console, set the location from there, and rerun the code, it works fine and I am able to create the agent.
I followed this GCP doc:
https://cloud.google.com/dialogflow/cx/docs/concept/region
I am looking for code to automate the region and location setting before running the Python code. Kindly provide me with the code.
Below is the code I am using to create the agent.
Error -
google.api_core.exceptions.FailedPrecondition: 400 com.google.apps.framework.request.FailedPreconditionException: Location settings have to be initialized before creating the agent in location: us-east1. Code: FAILED_PRECONDITION
grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
status = StatusCode.FAILED_PRECONDITION
details = "com.google.apps.framework.request.FailedPreconditionException: Location settings have to be initialized before creating the agent in location: us-east1. Code: FAILED_PRECONDITION"
debug_error_string = "{"created":"#1622183899.891000000","description":"Error received from peer ipv4:142.250.195.170:443","file":"src/core/lib/surface/call.cc","file_line":1068,"grpc_message":"com.google.apps.framework.request.FailedPreconditionException: Location settings have to be initialized before creating the agent in location: us-east1. Code: FAILED_PRECONDITION","grpc_status":9}"
main.py -
# Import Libraries
import google.auth
import google.auth.transport.requests
from google.cloud import dialogflowcx as df
from google.protobuf.field_mask_pb2 import FieldMask
import os, time
import pandas as pd

# Function - Authentication
def gcp_auth():
    cred, project = google.auth.default(scopes=["https://www.googleapis.com/auth/cloud-platform"])
    auth_req = google.auth.transport.requests.Request()
    cred.refresh(auth_req)

# Function - Create Agent
def create_agent(agent_name, agent_description, language_code, time_zone, location_id, location_path):
    if location_id == "global":
        agentsClient = df.AgentsClient()
    else:
        agentsClient = df.AgentsClient(client_options={"api_endpoint": f"{location_id}-dialogflow.googleapis.com:443"})
    agent = df.Agent(
        display_name=agent_name,
        description=agent_description,
        default_language_code=language_code,
        time_zone=time_zone,
        enable_stackdriver_logging=True,
    )
    createAgentRequest = df.CreateAgentRequest(agent=agent, parent=location_path)
    agent = agentsClient.create_agent(request=createAgentRequest)
    return agent
Currently, Dialogflow does not support configuring the location settings through the API, so you cannot initialize the location settings through it. You can only set the location through the Console.
As an alternative, since the location settings have to be initialized only once per region per project, you could set the location in the Console and then automate the agent creation process; some useful links: 1 and 2.
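For example, once the location has been initialized in the Console, the create_agent helper from the question could be driven with a regional parent path roughly like this (the project ID, region, and agent details below are placeholders):
# Hypothetical values for illustration only.
project_id = "my-project"
location_id = "us-east1"
location_path = f"projects/{project_id}/locations/{location_id}"

agent = create_agent(
    agent_name="test-agent",
    agent_description="Created via the Dialogflow CX API",
    language_code="en",
    time_zone="America/New_York",
    location_id=location_id,
    location_path=location_path,
)
print(agent.name)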
On the other hand, if you would find this feature useful, you can file a Feature Request here. It will be evaluated by Google's product team.
Many thanks Alexandre Moraes. I have raised a feature request for the same.
I am following the Ruby code sample to add a custom metric to Stackdriver; however, I keep getting a permission denied error.
client = Google::Cloud::Monitoring::Metric.new
project_name = Google::Cloud::Monitoring::V3::MetricServiceClient.project_path project_id
descriptor = Google::Api::MetricDescriptor.new(
  type: "custom.googleapis.com/my_metric#{random_suffix}",
  metric_kind: Google::Api::MetricDescriptor::MetricKind::GAUGE,
  value_type: Google::Api::MetricDescriptor::ValueType::DOUBLE,
  description: "This is a simple example of a custom metric."
)
result = client.create_metric_descriptor project_name, descriptor
The error I got is "Google::Gax::PermissionDeniedError (GaxError RPC failed, caused by 7:Permission monitoring.metricDescriptors.create denied (or the resource may not exist).)"
The environment variable GOOGLE_APPLICATION_CREDENTIALS is set, and it works fine for the Google Cloud Storage code below:
storage = Google::Cloud::Storage.new project: project_id
# Make an authenticated API request
storage.buckets.each do |bucket|
  puts bucket.name
end
At this point, I don't know what the problem is. Do I need to set up different credentials for Cloud Monitoring?
I am trying to deploy a Python lambda to AWS. This lambda just reads files from S3 buckets when given a bucket name and file path. It works correctly on the local machine if I run the following command:
sam build && sam local invoke --event testfile.json GetFileFromBucketFunction
The data from the file is printed to the console. Next, if I run the following command, the lambda is packaged and sent to my-bucket.
sam build && sam package --s3-bucket my-bucket --template-file .aws-sam\build\template.yaml --output-template-file packaged.yaml
The next step is to deploy in prod so I try the following command:
sam deploy --template-file packaged.yaml --stack-name getfilefrombucket --capabilities CAPABILITY_IAM --region my-region
The lambda can now be seen in the Lambda console. I can run it, but no contents are returned. If I change the service role manually to one which allows S3 get/put, then the lambda works. However, this undermines the whole point of using the AWS SAM CLI.
I think I need to add a policy to the template.yaml file. This link here seems to say that I should add a policy such as one shown here. So, I added:
Policies: S3CrudPolicy
under 'Resources:GetFileFromBucketFunction:Properties:'. I then rebuild the app and re-deploy, and the deployment fails with the following errors in CloudFormation:
1 validation error detected: Value 'S3CrudPolicy' at 'policyArn' failed to satisfy constraint: Member must have length greater than or equal to 20 (Service: AmazonIdentityManagement; Status Code: 400; Error Code: ValidationError; Request ID: unique number
and
The following resource(s) failed to create: [GetFileFromBucketFunctionRole]. . Rollback requested by user.
I delete the stack to start again. My thought was that 'S3CrudPolicy' is not an off-the-shelf policy that I can just use, but something I would have to define myself in the template.yaml file?
I'm not sure how to do this, and the docs don't seem to show any very simple use-case examples (from what I can see). If anyone knows how to do this, could you post a solution?
I tried the following:
S3CrudPolicy:
  PolicyDocument:
    -
      Action: "s3:GetObject"
      Effect: Allow
      Resource: !Sub arn:aws:s3:::${cloudtrailBucket}
      Principal: "*"
But it failed with the following error:
Failed to create the changeset: Waiter ChangeSetCreateComplete failed: Waiter encountered a terminal failure state Status: FAILED. Reason: Invalid template property or properties [S3CrudPolicy]
If anyone can help write a simple policy to read/write from S3, then that would be amazing. I'll need to write another one to get lambdas to invoke other lambdas as well, so a solution here (I imagine something similar?) would be great - or a decent, easy-to-use guide on how to write these policy statements.
Many thanks for your help!
Found it!! In case anyone else struggles with this, you need to add the following few lines to Resources:YourFunction:Properties in the template.yaml file:
Policies:
  - S3CrudPolicy:
      BucketName: "*"
The "*" will allow your lambda to talk to any bucket, you could switch for something specific if required. If you leave out 'BucketName' then it doesn't work and returns an error in CloudFormation syaing that S3CrudPolicy is invalid.
I have code which runs in Lambda, but the same code does not work on my system.
import boto3

asgName = "test"

def lambda_handler(event, context):
    client = boto3.client('autoscaling')
    asgName = "test"
    response = client.describe_auto_scaling_groups(AutoScalingGroupNames=[asgName])
    if not response['AutoScalingGroups']:
        return 'No such ASG'
    ...
    ...
    ...
The code below, which I try to run on Linux, prompts the error "No such ASG":
import boto3

asgName = "test"
client = boto3.client('autoscaling')
response = client.describe_auto_scaling_groups(AutoScalingGroupNames=[asgName])
if not response['AutoScalingGroups']:
    print('No such ASG')
The first thing to check is that you are connecting to the correct AWS region. If not specified, it defaults to us-east-1 (N. Virginia). A region can also be specified in the credentials file.
In your code, you can specify the region with:
client = boto3.client('autoscaling', region_name = 'us-west-2')
The next thing to check is that the credentials are associated with the correct account. The AWS Lambda function is obviously running in your desired account, but you should confirm that the code running "in linux" is using the same AWS account.
You can do this by using the AWS Command-Line Interface (CLI), which will use the same credentials as your Python code on the Linux computer. Run:
aws autoscaling describe-auto-scaling-groups --auto-scaling-group-names test
It should give the same result as the Python code running on that computer.
You might need to specify the region:
aws autoscaling describe-auto-scaling-groups --auto-scaling-group-names test --region us-west-2
(Of course, change your region as appropriate.)
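One quick way to confirm which account and region your local credentials resolve to is a small check like this sketch (the us-east-1 fallback is only an assumption so the STS call has a region to use):
import boto3

# Show the region and account that the local credentials resolve to.
session = boto3.session.Session()
print("Region:", session.region_name)   # None means no region is configured

sts = session.client('sts', region_name=session.region_name or 'us-east-1')
print("Account:", sts.get_caller_identity()['Account'])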
I've got this infrastructure description
variable "HEROKU_API_KEY" {}
provider "heroku" {
email = "sebastrident#gmail.com"
api_key = "${var.HEROKU_API_KEY}"
}
resource "heroku_app" "default" {
name = "judge-re"
region = "us"
}
Originally I forgot to specify the buildpack. It created the application on Heroku. I decided to add it to the resource entry:
buildpacks = [
  "heroku/java"
]
But when I try to apply the plan in Terraform, I get this error:
Error: Error applying plan:
1 error(s) occurred:
* heroku_app.default: 1 error(s) occurred:
* heroku_app.default: Post https://api.heroku.com/apps: Name is already taken
Terraform does not automatically rollback in the face of errors.
Instead, your Terraform state file has been partially updated with
any resources that successfully completed. Please address the error
above and apply again to incrementally change your infrastructure.
The Terraform plan looks like this:
Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but will not be
persisted to local or remote state storage.
------------------------------------------------------------------------
An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
+ create
Terraform will perform the following actions:
  + heroku_app.judge_re
      id:                <computed>
      all_config_vars.%: <computed>
      buildpacks.#:      "1"
      buildpacks.0:      "heroku/java"
      config_vars.#:     <computed>
      git_url:           <computed>
      heroku_hostname:   <computed>
      name:              "judge-re"
      region:            "us"
      stack:             <computed>
      web_url:           <computed>
Plan: 1 to add, 0 to change, 0 to destroy.
------------------------------------------------------------------------
Note: You didn't specify an "-out" parameter to save this plan, so Terraform
can't guarantee that exactly these actions will be performed if
"terraform apply" is subsequently run.
As a workaround I tried to add a destroy step to my deploy.sh script:
terraform init
terraform plan
terraform destroy -force
terraform apply -auto-approve
But it does not destroy the resource as I get the message Destroy complete! Resources: 0 destroyed.
What is the problem?
Link to build
It looks like you also changed the name of the resource. Your original example has the resource name heroku_app.default while your plan has heroku_app.judge_re.
To point your state to the remote resource, so Terraform knows you are editing and not trying to recreate the resource, use terraform import:
terraform import heroku_app.judge_re judge-re
In Terraform, you normally needn't destroy the whole stack when you just want to rebuild one or several resources in it.
terraform taint does this trick. The terraform taint command manually marks a Terraform-managed resource as tainted, forcing it to be destroyed and recreated on the next apply.
terraform taint heroku_app.default
Second, when you are troubleshooting why the resource isn't listed among the resources to destroy, please make sure you point to the right Terraform tfstate file.
When you run terraform plan, did you see any resources which were already created?