AWS Lambda export API Gateway backup to S3 - aws-lambda

I am trying to configure a Lambda function which will export an API backup to S3. But when I try to get an ordinary Swagger backup through Lambda using this script:
import boto3

client = boto3.client('apigateway')

def lambda_handler(event, context):
    response = client.get_export(
        restApiId='xtmeuujbycids',
        stageName='test',
        exportType='swagger',
        parameters={
            extensions: 'authorizers'
        },
        accepts='application/json'
    )
I am getting this error:
[ERROR] NameError: name 'extensions' is not defined
Please help me resolve this issue.

Could you please check whether the documentation has been explicitly published, and whether it has been deployed to a stage? It needs to be deployed before it is available in the export.

The problem is in:
parameters={
extensions: 'authorizers'
}
You're passing a dictionary, which is fine, but the key should be a string. Since you don't have quotes around extensions, Python tries to resolve it as a variable named extensions, which doesn't exist in your code, and so it raises the NameError.
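A minimal corrected sketch (the S3 bucket name and object key below are placeholders, since the question only says the backup should end up in S3):

import boto3

apigw = boto3.client('apigateway')
s3 = boto3.client('s3')

def lambda_handler(event, context):
    response = apigw.get_export(
        restApiId='xtmeuujbycids',
        stageName='test',
        exportType='swagger',
        parameters={
            'extensions': 'authorizers'  # quoted, so it is a string key
        },
        accepts='application/json'
    )
    # The export body is a streaming object; read it and write it to S3.
    s3.put_object(
        Bucket='my-api-backup-bucket',    # placeholder bucket name
        Key='backups/test-swagger.json',  # placeholder object key
        Body=response['body'].read()
    )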

Related

Terraform: How can a Lambda refer to S3 in the Terraform resource `aws_lambda_function`?

My Lambda Node source code is inside the S3 bucket as a zip file.
I want that source to be uploaded while applying the aws_lambda_function resource:
resource "aws_lambda_function" "terraform_lambda_func" {
s3_bucket = var.bucket_name
s3_key = "${var.zip_file_name}.zip"
function_name = var.lambdaFunctionName
role = aws_iam_role.okta-iam-v1.arn
handler = "index.handler"
runtime = "nodejs16.x"
}
Wanting it doesn't cut it, because that's not the way the relationship between a Lambda and its code works.
What the aws_lambda_function resource does is say: "there is a Lambda function and its code is there in that S3 bucket".
Because updating the file in the bucket doesn't automatically update the code of that Lambda, this resource doesn't have a way to reference new file content directly.
To do so, you need an aws_s3_object resource that uploads the new file to the bucket.
To trigger the actual update of the Lambda, you also need to pass the file hash to the aws_lambda_function. Since the aws_s3_object resource exports a source_hash property, you can link them, as shown in the sketch below.
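A minimal sketch of that wiring (resource names here are illustrative, and it assumes an AWS provider version that supports aws_s3_object with source_hash):

# Upload the packaged code to the bucket; the hash changes whenever the local zip changes.
resource "aws_s3_object" "lambda_package" {
  bucket      = var.bucket_name
  key         = "${var.zip_file_name}.zip"
  source      = "${path.module}/${var.zip_file_name}.zip"
  source_hash = filemd5("${path.module}/${var.zip_file_name}.zip")
}

resource "aws_lambda_function" "terraform_lambda_func" {
  s3_bucket        = aws_s3_object.lambda_package.bucket
  s3_key           = aws_s3_object.lambda_package.key
  # Redeploy the function whenever the local zip (and thus the uploaded object) changes.
  source_code_hash = filebase64sha256("${path.module}/${var.zip_file_name}.zip")
  function_name    = var.lambdaFunctionName
  role             = aws_iam_role.okta-iam-v1.arn
  handler          = "index.handler"
  runtime          = "nodejs16.x"
}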
See How to update aws_lambda_function Terraform resource when ZIP package is changed on S3?

No environment configuration found. DefaultAzureCredential()

I am trying to use this Python sample to authenticate a client with an Azure service:
# pip install azure-identity
from azure.identity import DefaultAzureCredential
# pip install azure-mgmt-compute
from azure.mgmt.compute import ComputeManagementClient
# pip install azure-mgmt-network
from azure.mgmt.network import NetworkManagementClient
# pip install azure-mgmt-resource
from azure.mgmt.resource import ResourceManagementClient

SUBSCRIPTION_ID = creds_obj['SUBSCRIPTION_ID']

# Create client
# For other authentication approaches, please see: https://pypi.org/project/azure-identity/
resource_client = ResourceManagementClient(
    credential=DefaultAzureCredential(),
    subscription_id=SUBSCRIPTION_ID
)
network_client = NetworkManagementClient(
    credential=DefaultAzureCredential(),
    subscription_id=SUBSCRIPTION_ID
)
compute_client = ComputeManagementClient(
    credential=DefaultAzureCredential(),
    subscription_id=SUBSCRIPTION_ID
)
I keep getting No environment configuration found.
The code sample is directly from the Microsoft GitHub: https://github.com/Azure/azure-sdk-for-python/blob/master/sdk/resources/azure-mgmt-resource/azure/mgmt/resource/resources/_resource_management_client.py. Ideally I would like to manage this configuration using environment variables or a config file. Is there any way to do this?
When using the Azure Identity client library for Python, DefaultAzureCredential attempts to authenticate via several mechanisms in order (environment variables, managed identity, shared token cache, Visual Studio Code, Azure CLI, interactive browser), stopping when one succeeds.
You could set the environment variables (AZURE_TENANT_ID, AZURE_CLIENT_ID and AZURE_CLIENT_SECRET) to fix it:
from azure.identity import DefaultAzureCredential

credential = DefaultAzureCredential()
Or set the properties in a config file and use ClientSecretCredential to create the credential:
from azure.identity import ClientSecretCredential
subscription_id = creds_obj["AZURE_SUBSCRIPTION_ID"]
tenant_id = creds_obj["AZURE_TENANT_ID"]
client_id = creds_obj["AZURE_CLIENT_ID"]
client_secret = creds_obj["AZURE_CLIENT_SECRET"]
credential = ClientSecretCredential(tenant_id=tenant_id, client_id=client_id, client_secret=client_secret)
I was having somewhat similar trouble following this Azure Key Vault tutorial, which brought me here.
The solution I found was overriding the default values in the DefaultAzureCredential() constructor.
https://learn.microsoft.com/en-us/python/api/azure-identity/azure.identity.defaultazurecredential?view=azure-python
For reasons people far smarter than me will be able to explain, I found that even though I had credentials from the Azure CLI, it was not using those and was instead looking for environment credentials, which I did not have. So it threw an exception.
Once I set the exclude_environment_credential argument to True, it then looked instead for managed identity credentials, which again I did not have.
Eventually, when I excluded all credential types other than those from the CLI, it worked OK for me, as shown below.
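A minimal sketch of that kind of constructor override (the exact set of exclude_* flags you need may differ by azure-identity version and environment):

from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import ResourceManagementClient

# Skip every credential source except the Azure CLI login.
credential = DefaultAzureCredential(
    exclude_environment_credential=True,
    exclude_managed_identity_credential=True,
    exclude_shared_token_cache_credential=True,
    exclude_visual_studio_code_credential=True,
    exclude_interactive_browser_credential=True,
)

resource_client = ResourceManagementClient(
    credential=credential,
    subscription_id='<subscription-id>',  # placeholder
)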
I hope this helps someone. Those with more experience, please feel free to edit as you see fit.

DevOps: AWS Lambda .zip with Terraform

I have written Terraform to create a Lambda function in AWS.
This includes specifying my python code zipped.
Running it from the command line on my tech box, all goes well.
The terraform apply action sees my zip moved into AWS and used to create the lambda.
Key section of code:
resource "aws_lambda_function" "meta_lambda" {
  filename         = "get_resources.zip"
  source_code_hash = filebase64sha256("get_resources.zip")
  .....
Now, to get this into other environments, I have to push my Terraform via Azure DevOps.
However, when I try to build in DevOps, I get the following:
Error: Error in function call
  on main.tf line 140, in resource "aws_lambda_function" "meta_lambda":
  140: source_code_hash = filebase64sha256("get_resources.zip")
Call to function "filebase64sha256" failed: no file exists at get_resources.zip.
I have a feeling that I am missing a key concept here, as I can see the .zip in the repo, so I do not understand how it is not found by the build.
Any hints/clues as to what I am doing wrong, gratefully welcome.
Chaps, I'm afraid that I may have just been in over my head here, being new to Terraform & DevOps!
I had a word with our (more) tech folks and they have sorted this.
The reason I think yours is failing is because the tar Terraform step needs to use a different command line so that the zip file gets included in the artifacts, e.g. going from
tar -cvpf terraform.tar .terraform .tf tfplan
to
tar --recursion -cvpf terraform.tar --exclude='/.git' --exclude='.gitignore' .
... if that means anything to you!
Whatever they did, it works!
As there is a bounty on this, I'm still going to assign it, as I am grateful for the input!
Sorry if this was a bit of a newbie error.
You can try building your package with the Terraform AWS Lambda build module, as it has been very useful for this process: Terraform Lambda build module.
According to the documentation example, in the source_code_hash argument, filebase64sha256("get_resources.zip") needs to be enclosed in double quotes.
You can refer to this document for details.

How to add the boto library to a Python-based AWS Lambda function?

I want to create a Lambda function in Python 3.7 that will use boto to perform some AWS query.
The function is very simple. I added import boto to the simple vanilla template to try out how to enable boto.
import json
import boto

def lambda_handler(event, context):
    # TODO implement
    return {
        'statusCode': 200,
        'body': json.dumps('Hello from Lambda!')
    }
Needless to say, it fails:
Response:
{
    "errorMessage": "Unable to import module 'lambda_function': No module named 'boto'",
    "errorType": "Runtime.ImportModuleError"
}
So how can I add boto to my code?
I have checked out Layers and it is empty.
I think I can create one by uploading a zip file. But what should I put inside the zip file? What sort of directory structure is Lambda expecting?
boto has been deprecated. You should be using boto3, which is already bundled with the Lambda Python runtimes:
import boto3
This is like adding any additional dependency to an AWS Lambda function. Please follow the documentation to add the boto package.
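For illustration, the vanilla template from the question works as-is once the import is switched to boto3 (the STS call is just a hypothetical example of "some AWS query"):

import json
import boto3  # bundled with the AWS Lambda Python runtimes

def lambda_handler(event, context):
    # Example AWS query: look up the account this function runs in.
    account_id = boto3.client('sts').get_caller_identity()['Account']
    return {
        'statusCode': 200,
        'body': json.dumps(f'Hello from Lambda in account {account_id}!')
    }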

Can't create bucket using aws-sdk ruby gem. Aws::S3::Errors::SignatureDoesNotMatch

I have a new computer and I'm trying to set up my AWS CLI environment so that I can run a management console I've created.
This is the code I'm running:
def create_bucket(bucket_args)
  AWS_S3 = Aws::S3::Client.new(signature_version: 'v4')
  AWS_S3.create_bucket(bucket_args)
end
Which raises this error:
Aws::S3::Errors::SignatureDoesNotMatch - The request signature we calculated does not match the signature you provided. Check your key and signing method.:
This was working properly on my other computer, which I no longer have access to. I remember debugging this same error on the other computer, and I thought I had resolved it by adding signature_version = s3v4 to my ~/.aws/config file. But this fix is not working on my new computer, and I'm not sure why.
To give some more context: I am using aws-sdk (2.5.5) and these aws cli specs: aws-cli/1.11.2 Python/2.7.12 Linux/4.4.0-38-generic botocore/1.4.60
In this case the issue was that my AWS credentials (in ~/.aws/credentials), specifically my secret access key, were invalid.
The original had a slash in it:
xx/xxxxxxxxxxxxxxxxxxxxxxxxxx
which I didn't notice at first, so when I double-clicked the token to select the word, the selection didn't include the first three characters. I then pasted this into the terminal when running aws configure.
To fix this, I found the correct, original secret access key and set the correct value in ~/.aws/credentials.
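For reference, that file has this shape (placeholder values, default profile assumed):

[default]
aws_access_key_id = AKIAXXXXXXXXXXXXXXXX
aws_secret_access_key = xx/xxxxxxxxxxxxxxxxxxxxxxxxxx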
