Calling gdal2tiles.py in an AWS Lambda function

I'm trying to call gdal2tiles.py in an AWS Lambda function using the GeoLambda layer.
I can't figure out how to call this script from the Lambda function.
My Lambda function looks like this so far:
import json
import os
from osgeo import gdal
def lambda_handler(event, context):
    os.system("gdal2tiles.py -p -z [0-6] test.jpg")
In the log I have this error: sh: gdal2tiles.py: command not found
Any idea how to solve this? Thank you.

One way to do it is to import the gdal2tiles utilities from the GeoLambda layer that you added to your Lambda function.
For example:
import gdal2tiles

gdal2tiles.generate_tiles('/path/to/input_file', '/path/to/output_dir/', nb_processes=2, zoom='0-6')
Read more in the gdal2tiles documentation.
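Putting it together, a minimal handler might look like the sketch below; the bucket name, object key, and /tmp paths are placeholders, not details from the original question:

import boto3
import gdal2tiles  # provided by the gdal2tiles layer

s3 = boto3.client('s3')


def lambda_handler(event, context):
    # Lambda can only write under /tmp, so stage the input and output there
    local_input = '/tmp/test.jpg'
    s3.download_file('my-bucket', 'test.jpg', local_input)

    output_dir = '/tmp/tiles/'
    gdal2tiles.generate_tiles(local_input, output_dir, nb_processes=2, zoom='0-6')

    return {'statusCode': 200, 'body': 'tiles written to ' + output_dir}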
Edit:
OK, I made it work with this set of layers attached to the Lambda.
The first two layers come straight from the GeoLambda GitHub repository:
arn:aws:lambda:us-east-1:552188055668:layer:geolambda-python:3
arn:aws:lambda:us-east-1:552188055668:layer:geolambda:4
The third layer is our gdal2tiles layer, which was created locally and attached to the Lambda function:
arn:aws:lambda:us-east-1:246990787935:layer:gdaltiles:1
You can download the zip from here.
I hope you also added the environment variables below to your Lambda function configuration:
GDAL_DATA=/opt/share/gdal
PROJ_LIB=/opt/share/proj (only needed for GeoLambda 2.0.0+)
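If you configure the function with boto3 rather than through the console, attaching the layers and setting these variables might look roughly like the sketch below; the function name is hypothetical, while the layer ARNs and variables are the ones listed above:

import boto3

lambda_client = boto3.client('lambda', region_name='us-east-1')

lambda_client.update_function_configuration(
    FunctionName='my-gdal2tiles-function',  # placeholder name
    Layers=[
        'arn:aws:lambda:us-east-1:552188055668:layer:geolambda-python:3',
        'arn:aws:lambda:us-east-1:552188055668:layer:geolambda:4',
        'arn:aws:lambda:us-east-1:246990787935:layer:gdaltiles:1',
    ],
    Environment={'Variables': {
        'GDAL_DATA': '/opt/share/gdal',
        'PROJ_LIB': '/opt/share/proj',
    }},
)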

Related

WrappedAPIView.__name__ = func.__name__: AttributeError: 'dict' object has no attribute '__name__'

I am in a little bit of a pickle.
I have multiple decorators wrapping my view functions. I want to test a view function using pytest, which means the decorators will also be executed. Some of those decorators make API calls to an external service, and I do not want to make those calls while running my tests, so instead I mock the responses from those decorators. When I run the test I get AttributeError: 'dict' object has no attribute '__name__', and pytest points to the decorators.py file in the djangorestframework package as the source of the error. Any idea what I am doing wrong?
Views.py file
@api_view(['POST'])
@DecoratorClass.decorator_one
@DecoratorClass.decorator_two
@DecoratorClass.decorator_three
@DecoratorClass.decorator_four
@DecoratorClass.decorator_five
@DecoratorClass.decorator_six
@DecoratorClass.decorator_seven
def my_view_fun(request):
    my_data = TenantService.create_tenant(request)
    return ResponseManager.handle_response(message="successful", data=my_data.data, status=201)
This works perfectly with manual testing, I only get this problem when I am running the test with pytest.
I am making the external API calls in decorators three, four and five.
TL;DR:
How can I handle the decorators wrapped around a view function when testing that view function, given that some of those decorators make external API calls that should be mocked in a test?
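One common approach, sketched below, is to replace the API-calling decorators with pass-throughs before the view module is imported; the module path myapp and the fixture name are hypothetical, not taken from the question:

from unittest import mock

import pytest


def passthrough(func):
    # decorator replacement that leaves the wrapped view untouched
    return func


@pytest.fixture
def patched_view():
    # Patch the decorators that call external services *before* the views
    # module is imported, so the real decorators are never applied.
    with mock.patch('myapp.decorators.DecoratorClass.decorator_three', passthrough), \
         mock.patch('myapp.decorators.DecoratorClass.decorator_four', passthrough), \
         mock.patch('myapp.decorators.DecoratorClass.decorator_five', passthrough):
        from myapp import views  # import happens while the patches are active
        yield views.my_view_fun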

Getting a list of AWS Lambda functions from within the code of a Lambda function

I have a Lambda function within AWS based on the clear-lambda-storage application at . The code is as follows:
from argparse import Namespace
from clear_lambda_storage import remove_old_lambda_versions
def clear_lambda_storage(event, context):
    remove_old_lambda_versions(Namespace(token_key_id=None, token_secret=None, regions=None, profile=None, num_to_keep=3, function_names=["insertName"]))
    return "Successful clean! 🗑 ✅"
With the function_names argument I want to pass a list of the names of all the Lambda functions in the account. Is there any way I can do this besides hardcoding them manually (so that if a new Lambda function is added, the list is updated)?
Use the SDK. In Python, this is boto3, so you want one of the Lambda client methods, probably list_functions: https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/lambda.html#Lambda.Client.list_functions
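A minimal sketch of that approach, paginating list_functions to collect every function name before handing the list to remove_old_lambda_versions (the Lambda's own execution role credentials and default region are assumed):

from argparse import Namespace

import boto3
from clear_lambda_storage import remove_old_lambda_versions


def clear_lambda_storage(event, context):
    # Collect the names of every Lambda function in this account/region.
    lambda_client = boto3.client("lambda")
    function_names = []
    for page in lambda_client.get_paginator("list_functions").paginate():
        function_names.extend(fn["FunctionName"] for fn in page["Functions"])

    remove_old_lambda_versions(Namespace(
        token_key_id=None, token_secret=None, regions=None,
        profile=None, num_to_keep=3, function_names=function_names))
    return "Successful clean!"

Note that the execution role needs the lambda:ListFunctions permission for this call to succeed.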

Overriding boto3 client with stubbed client

There are a lot of resources out there for using pytest, Moto, and botocore Stubber to write unit tests.
EDIT: I am rephrasing this question after further investigation.
I have a lambda_function Python script that I want to test with pytest and the botocore Stubber. Inside lambda_function I import an SSM client from another Python file (ssm_clt = boto3.client('ssm', region_name=region)).
The problem arises when I set up the test like this:
def test_lambda_handler(ssm_stubber):
    ssm_stubber.activate()
    ssm_stubber.add_response(
        'get_parameters_by_path',
        expected_params={'Path': 'my/ssm/parameter', 'Recursive': 'True'},
        service_response={
            'Parameters': [
                {
                    'Name': 'my/ssm/parameter',
                    'Type': 'String',
                    'Value': 'my_string returned',
                },
            ],
        },
    )
    ssm_stubber.deactivate()
    ssm_stubber.assert_no_pending_responses()
with the ssm_stubber defined as a pytest fixture:
@pytest.fixture(autouse=True)
def ssm_stubber():
    with Stubber(clients.ssm_clt) as stubber:
        yield stubber
It uses the actual boto3 client and not the stubbed one, because of the import statement in lambda_function. I'm struggling with how to get past this. I'd like to avoid putting code in the regular lambda_function that exists only for testing.
It is almost like I need a conditional import, but to my knowledge that is bad practice.
Did I structure my project in a way that makes it almost impossible to use stubber with pytest in this way?
So I ended up just using the monkeypatch functionality of pytest. This was a lot simpler than trying to patch the boto3 client and get it to stub properly. Below is some example code of what I did.
Here is the function I want to test. The problem was that the AWS API call inside param_dictionary = another_function.get_ssm_parameters() never got stubbed correctly, because it sits outside the function the test is testing but is still executed when the module is imported. This resulted in the real boto3 client being used during testing. All other API calls within lambda_handler were always stubbed correctly.
"""lambda_function.py"""
import another_function

# this module-level call makes an SSM API request; it wasn't getting stubbed
# because it sits outside the function under test, but it still executes
# when the module is imported
param_dictionary = another_function.get_ssm_parameters()


def lambda_handler(event, context):
    # other code here that runs fine; AWS API calls made here are properly stubbed
    ...
This is the file that contained the AWS API call to parameter store.
"""another_function.py"""
import json
import os

import boto3

ssm_clt = boto3.client('ssm', region_name='us-west-2')


def get_ssm_parameters():
    param_dict = {}
    ssm_resp = ssm_clt.get_parameters_by_path(
        Path=f'/{os.environ["teamName"]}/{os.environ["environment"]}/etl',
        Recursive=True
    )
    for parameter in ssm_resp["Parameters"]:
        param_dict.update(json.loads(parameter["Value"]))
    return param_dict
This is my test. You can see I pass in the monkeypatch pytest fixture, which patches the response from the function get_ssm_parameters() so it does not make an API call.
"""test_lambda_function.py"""
def test_my_func_one(return_param_dict):
    from lambda_function import lambda_handler
    # insert other stubbers and "add_response" code here for AWS API calls
    # that occur inside lambda_handler
    lambda_handler('my_event', None)
This is the pytest configuration file where I set up the monkeypatching. I use monkeypatch.setattr to override the return value of get_ssm_parameters(); the replacement return value is defined in the function param_dict().
"""conftest.py"""
import pytest

import another_function


def param_dict():
    param_dict = {"my_key": "my_value"}
    return param_dict


@pytest.fixture(autouse=True)
def return_param_dict(monkeypatch):
    monkeypatch.setattr(another_function, "get_ssm_parameters", param_dict)
Ultimately this was a lot simpler to do than trying to patch a client in another module outside of the function I was testing.

Use Dash with websockets

What is the best way to use Dash with WebSockets to build a real-time dashboard? I would like to update a graph every time a message is received, but the only thing I've found is calling the callback every x seconds, as in the example below.
import dash
import dash_core_components as dcc
import dash_html_components as html
import dash_daq as daq
from dash.dependencies import Input, Output
import plotly
import plotly.graph_objs as go
from websocket import create_connection
from tinydb import TinyDB, Query
import json
import ssl
# Setting up the websocket and the necessary web handles
ws = create_connection(address, sslopt={"cert_reqs": ssl.CERT_NONE})
app = dash.Dash(__name__)
app.layout = html.Div(
    [
        dcc.Graph(id='live-graph', animate=True),
        dcc.Interval(
            id='graph-update',
            interval=1 * 1000,
            n_intervals=0,
        ),
    ]
)
@app.callback(Output('live-graph', 'figure'),
              [Input('graph-update', 'n_intervals')])
def update_graph_live(n):
    message = ws.recv()
    x = message.get('data1')
    y = message.get('data2')
    # .....
    fig = go.Figure(
        data=[go.Bar(x=x, y=y)],
        layout=go.Layout(
            title=go.layout.Title(text="Bar Chart")
        )
    )
    return fig


if __name__ == '__main__':
    app.run_server(debug=True)
Is there a way to trigger the callback every time a message is received (maybe by storing the messages in a database first)?
This forum post describes a method to use websocket callbacks with Dash:
https://community.plot.ly/t/triggering-callback-from-within-python/23321/6
Update
Tried it; it works well. The environment is Windows 10 x64 + Python 3.7.
To test, download the .tar.gz file and run python usage.py. It will complain about some missing packages; install these. You might have to edit the address from 0.0.0.0 to 127.0.0.1 in usage.py. Browse to http://127.0.0.1:5000 to see the results. If I had more time, I'd put this example up on GitHub (ping me if you're having trouble getting it to work, or if the original gets lost).
I had two separate servers: one for Dash, the other a socket server, running on different ports. On receiving a message, I edit a common JSON file to share the data with Dash's callback. That's how I did it.
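A rough sketch of that shared-file approach follows; the WebSocket address, message fields, and file path are placeholders rather than details from the original post. A separate listener process writes the latest message to a JSON file:

# listener.py - receives WebSocket messages and dumps the latest one to a shared file
import json
import ssl

from websocket import create_connection

ws = create_connection("wss://example.com/feed", sslopt={"cert_reqs": ssl.CERT_NONE})

while True:
    message = json.loads(ws.recv())
    with open("latest.json", "w") as f:
        json.dump(message, f)

The Dash app's Interval callback then just reads the shared file instead of calling ws.recv() itself:

@app.callback(Output('live-graph', 'figure'),
              [Input('graph-update', 'n_intervals')])
def update_graph_live(n):
    with open("latest.json") as f:
        message = json.load(f)
    return go.Figure(data=[go.Bar(x=message.get('data1'), y=message.get('data2'))])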

How do I set an alarm to terminate an EC2 instance using boto?

I have been unable to find a simple example which shows me how to use boto to terminate an Amazon EC2 instance using an alarm (without using AutoScaling). I want to terminate the specific instance that has a CPU usage less than 1% for 10 minutes.
Here is what I've tried so far:
import boto.ec2
import boto.ec2.cloudwatch
from boto.ec2.cloudwatch import MetricAlarm
conn = boto.ec2.connect_to_region("us-east-1", aws_access_key_id=ACCESS_KEY, aws_secret_access_key=SECRET_KEY)
cw = boto.ec2.cloudwatch.connect_to_region("us-east-1", aws_access_key_id=ACCESS_KEY, aws_secret_access_key=SECRET_KEY)
reservations = conn.get_all_instances()
for r in reservations:
    for inst in r.instances:
        alarm = boto.ec2.cloudwatch.MetricAlarm(name='TestAlarm', description='This is a test alarm.', namespace='AWS/EC2', metric='CPUUtilization', statistic='Average', comparison='<=', threshold=1, period=300, evaluation_periods=2, dimensions={'InstanceId':[inst.id]}, alarm_actions=['arn:aws:automate:us-east-1:ec2:terminate'])
        cw.put_metric_alarm(alarm)
Unfortunately it gives me this error:
dimensions={'InstanceId':[inst.id]}, alarm_actions=['arn:aws:automate:us-east-1:ec2:terminate'])
TypeError: __init__() got an unexpected keyword argument 'alarm_actions'
I'm sure it's something simple I'm missing.
Also, I am not using CloudFormation, so I cannot use the AutoScaling feature. This is because I don't want the alarm to use a metric across an entire group; I want it only for a specific instance, and I want it to terminate only that specific instance (not any instance in a group).
Thanks in advance for your help!
The alarm actions are not passed through dimensions but rather added as an attribute to the MetricAlarm object that you are using. In your code you need to do the following:
alarm = boto.ec2.cloudwatch.MetricAlarm(name='TestAlarm', description='This is a test alarm.', namespace='AWS/EC2', metric='CPUUtilization', statistic='Average', comparison='<=', threshold=1, period=300, evaluation_periods=2, dimensions={'InstanceId':[inst.id]})
alarm.add_alarm_action('arn:aws:automate:us-east-1:ec2:terminate')
cw.put_metric_alarm(alarm)
You can also see the boto CloudWatch documentation here:
http://docs.pythonboto.org/en/latest/ref/cloudwatch.html#module-boto.ec2.cloudwatch.alarm
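For reference, a roughly equivalent sketch with the newer boto3 SDK (not what the question used; the client setup and per-instance alarm names are illustrative):

import boto3

cw = boto3.client('cloudwatch', region_name='us-east-1')
ec2 = boto3.client('ec2', region_name='us-east-1')

# one alarm per instance, each terminating only that instance
for reservation in ec2.describe_instances()['Reservations']:
    for inst in reservation['Instances']:
        cw.put_metric_alarm(
            AlarmName=f"TestAlarm-{inst['InstanceId']}",
            AlarmDescription='Terminate when average CPU <= 1% for 10 minutes.',
            Namespace='AWS/EC2',
            MetricName='CPUUtilization',
            Statistic='Average',
            ComparisonOperator='LessThanOrEqualToThreshold',
            Threshold=1,
            Period=300,
            EvaluationPeriods=2,
            Dimensions=[{'Name': 'InstanceId', 'Value': inst['InstanceId']}],
            AlarmActions=['arn:aws:automate:us-east-1:ec2:terminate'],
        )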
