There are a lot of resources out there for using pytest, Moto, and botocore Stubber to write unit tests.
EDIT. I am rephrasing this question after further investigation:
I have a lambda_function Python script that I want to test with pytest and the Boto Stubber. Inside of lambda_function I import an SSM client from another Python file (ssm_clt = boto3.client('ssm', region_name=region)).
The problem is when I set up the pytest test like this:
def test_lambda_handler(ssm_stubber):
    ssm_stubber.activate()
    ssm_stubber.add_response(
        'get_parameters_by_path',
        expected_params={'Path': 'my/ssm/parameter', 'Recursive': True},
        service_response={
            'Parameters': [
                {
                    'Name': 'my/ssm/parameter',
                    'Type': 'String',
                    'Value': 'my_string returned',
                },
            ],
        },
    )
    ssm_stubber.deactivate()
    ssm_stubber.assert_no_pending_responses()
with the ssm_stubber defined as a pytest fixture:
import pytest
from botocore.stub import Stubber
import clients  # the module that creates ssm_clt

@pytest.fixture(autouse=True)
def ssm_stubber():
    with Stubber(clients.ssm_clt) as stubber:
        yield stubber
It uses the actual boto3 client and not the stubbed one, because the import statement in lambda_function creates the client as soon as the module is loaded. I'm struggling with how to get past this. I'd like to not put a bunch of code in the regular lambda_function that exists only for testing.
It is almost like I need a conditional import, but to my knowledge this is bad practice.
Did I structure my project in a way that makes it almost impossible to use stubber with pytest in this way?
So I ended up just using the monkeypatch functionality of pytest. This was a lot simpler than trying to patch the boto3 client and get it to stub properly. Below is some example code of what I did.
Here is the function I want to test. The problem was that the AWS API call inside param_dictionary = another_function.get_ssm_parameters() never got stubbed correctly, because it runs at module import time, outside the function under test. As a result it tried to use the real boto3 client during testing. All other API calls within lambda_handler were always stubbed correctly.
"""lambda_function.py"""
import another_function
# this was the function that has an SSM AWS client call in it that wasn't get properly stubbed because it is outside of the function I am testing but is still executed as part of the script
param_dictionary = another_function.get_ssm_parameters()
def lambda_handler(event, context):
# other code here that runs fine and AWS API calls that are properly stubbed
This is the file that contained the AWS API call to parameter store.
"""another_function.py"""
import boto3
ssm_clt = boto3.client('ssm', region_name='us-west-2')
def get_ssm_parameters()
param_dict = {}
ssm_resp = ssm_clt.get_parameters_by_path(
Path=f'/{os.environ["teamName"]}/{os.environ["environment"]}/etl',
Recursive=True
)
for parameter in ssm_resp["Parameters"]:
param_dict.update(json.loads(parameter["Value"]))
return param_dict
This is my test. You can see I pass in the monkeypatch pytest fixture, which patches the response of the function get_ssm_parameters() so that it does not make an API call. Note that lambda_handler is imported inside the test, after the autouse fixture has applied the patch; a top-level import would run the module-level call before the patch is in place.
"""test_lambda_function.py"""
def test_my_func_one(return_param_dict):
from lambda_function import lambda_handler
# insert other snubbers and "add_response" code here for AWS API calls that occur inside of the lambda_handler
lambda_handler('my_event', None)
This is the config file for pytest where I set up the monkeypatching. I use the setattr functionality of monkeypatch to override the return value of get_ssm_parameters() with the function param_dict() defined below.
"""conftest.py"""
import pytest
import another function
def param_dict():
param_dict = {"my_key": "my_value"}
return param_dict
#pytest.fixture(autouse=True)
def return_param_dict(monkeypatch):
monkeypatch.setattr(another_function, "get_ssm_parameters", param_dict)
Ultimately this was a lot simpler to do than trying to patch a client in another module outside of the function I was testing.
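For comparison, the Stubber approach from the original question can be made to work by attaching the stub to the client where it is defined and deferring the lambda_function import until the stub is active, though it is brittle because the module-level call only runs on the first import. A sketch, with a made-up parameter value:

from botocore.stub import Stubber
import pytest

import another_function

@pytest.fixture
def ssm_stubber():
    # stub the exact client object that another_function uses
    with Stubber(another_function.ssm_clt) as stubber:
        yield stubber
        stubber.assert_no_pending_responses()

def test_lambda_handler(ssm_stubber):
    ssm_stubber.add_response(
        'get_parameters_by_path',
        service_response={'Parameters': [
            {'Name': '/team/dev/etl/config', 'Type': 'String', 'Value': '{"my_key": "my_value"}'},
        ]},
    )
    # import only after the stub is active so the module-level call is intercepted;
    # any calls made inside lambda_handler need their own add_response entries
    from lambda_function import lambda_handler
    lambda_handler('my_event', None)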
Related
I am in a little bit of a pickle.
I have multiple decorators wrapping my view functions. I want to test a view function using pytest, which means the decorators will also be executed. In some of those decorators I am making API calls to an external service, and I do not want to make those calls while running my tests, so instead I am mocking the responses from those decorators. When I ran the test I got AttributeError: 'dict' object has no attribute '__name__', and pytest pointed to the decorators.py file in the djangorestframework package as the source of the error. Any idea what I am doing wrong?
Views.py file
@api_view(['POST'])
@DecoratorClass.decorator_one
@DecoratorClass.decorator_two
@DecoratorClass.decorator_three
@DecoratorClass.decorator_four
@DecoratorClass.decorator_five
@DecoratorClass.decorator_six
@DecoratorClass.decorator_seven
def my_view_fun(request):
    my_data = TenantService.create_tenant(request)
    return ResponseManager.handle_response(message="successful", data=my_data.data, status=201)
This works perfectly with manual testing, I only get this problem when I am running the test with pytest.
I am making the external API calls in decorators three, four and five.
TL;DR:
How can I handle the decorators wrapped around a view function when testing that view function in a situation where some of those decorators are making external API calls which should ideally be mocked in a test.
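That AttributeError usually means a mock has replaced something callable with a plain dict: api_view (and functools.wraps) reads __name__ from the object it wraps, so a decorator or view that has been swapped for a dict response blows up inside djangorestframework's decorators.py. One way around it is to patch the decorators that hit the external service with a pass-through and re-import the view module while the patches are active. A sketch, not from the original post; the myapp module paths and the URL are assumed names:

from unittest import mock
import importlib

from rest_framework.test import APIRequestFactory

from myapp.decorators import DecoratorClass  # assumed import path

def passthrough(func):
    # identity decorator: hands the wrapped view back untouched
    return func

def test_my_view_fun():
    with mock.patch.object(DecoratorClass, 'decorator_three', passthrough), \
         mock.patch.object(DecoratorClass, 'decorator_four', passthrough), \
         mock.patch.object(DecoratorClass, 'decorator_five', passthrough):
        import myapp.views
        views = importlib.reload(myapp.views)  # re-applies the decorator stack with the patches active
        request = APIRequestFactory().post('/tenants/', {}, format='json')
        response = views.my_view_fun(request)
    assert response.status_code == 201

The key point is that the replacement is a function rather than a dict, so everything downstream that expects a callable with a __name__ keeps working.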
I'm confused about factories.
@pytest.fixture
def a_api_request_factory():
    return APIRequestFactory()

class TestUserProfileDetailView(TestCase):
    def test_create_userprofile(self, up=a_user_profile, rf=a_api_request_factory):
        """creates an APIRequest and uses an instance of UserProfile from a_user_profile to test a view user_detail_view"""
        request = rf().get('/api/userprofile/')  # the problem line
        request.user = up.user
        response = userprofile_detail_view(request)
        self.assertEqual(response.status_code, 200)
        self.assertEqual(response.data['user'], up.user.username)
If I take out the parens from rf().get... then I get "function doesn't have a get attribute".
If I call it directly then it gives me:
"Fixture "a_api_request_factory" called directly. Fixtures are not
meant to be called directly, but are created automatically when test
functions request them as parameters. See
https://docs.pytest.org/en/stable/fixture.html for more information
about fixtures, and
https://docs.pytest.org/en/stable/deprecations.html#calling-fixtures-directly
about how to update your code."
I do believe I've hit every combination of with or without parens in all relevant locations. Where do the parens go for fixtures?
Or better yet is there a pattern to avoid this type of confusion completely?
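The parens never appear in your test code at all: pytest calls the fixture itself and passes you the return value, so inside the test the argument is already an APIRequestFactory instance. Fixtures also cannot be requested through default argument values, and unittest.TestCase methods cannot receive fixtures as parameters at all, which is why the usual pattern is a plain pytest-style test function. A sketch, assuming a_user_profile is a fixture returning a UserProfile and pytest-django provides the django_db marker:

import pytest
from rest_framework.test import APIRequestFactory

from myapp.views import userprofile_detail_view  # assumed import path

@pytest.fixture
def a_api_request_factory():
    return APIRequestFactory()

@pytest.mark.django_db
def test_create_userprofile(a_api_request_factory, a_user_profile):
    # both arguments are already the fixtures' return values; no parens needed
    request = a_api_request_factory.get('/api/userprofile/')
    request.user = a_user_profile.user
    response = userprofile_detail_view(request)
    assert response.status_code == 200
    assert response.data['user'] == a_user_profile.user.username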
I'm trying to call gdal2tiles.py in an AWS Lambda function using the GeoLambda layer.
I can't figure out how to call this script from the Lambda function.
My lambda function looks like this so far:
import json
import os

from osgeo import gdal

def lambda_handler(event, context):
    os.system("gdal2tiles.py -p -z [0-6] test.jpg")
In the log I have this error: sh: gdal2tiles.py: command not found
Any idea how to solve this? Thank you.
One way to do it is to import the gdal2tiles utilities from the GeoLambda layer that you added to your Lambda function.
For example:
import gdal2tiles

gdal2tiles.generate_tiles('/path/to/input_file', '/path/to/output_dir/', nb_processes=2, zoom='0-6')
Read more about it in the gdal2tiles documentation.
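Put together in a handler it might look like this (a sketch, assuming the source image is pulled from S3; the bucket and key are placeholders):

import boto3
import gdal2tiles

s3 = boto3.client('s3')

def lambda_handler(event, context):
    src = '/tmp/test.jpg'  # /tmp is the only writable path in Lambda
    s3.download_file('my-input-bucket', 'test.jpg', src)  # placeholder bucket/key
    gdal2tiles.generate_tiles(src, '/tmp/tiles/', nb_processes=2, zoom='0-6')
    # upload /tmp/tiles/ back to S3 here if the tiles need to persist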
Edit:
OK, I made it work with this set of layers attached to the Lambda.
The first 2 layers are straight from the GitHub repo:
arn:aws:lambda:us-east-1:552188055668:layer:geolambda-python:3
arn:aws:lambda:us-east-1:552188055668:layer:geolambda:4
The 3rd layer is our gdal2tiles layer, created locally and attached to the Lambda function:
arn:aws:lambda:us-east-1:246990787935:layer:gdaltiles:1
You can download the zip from here.
And I hope you added the below environment variables to your Lambda function configuration:
GDAL_DATA=/opt/share/gdal
PROJ_LIB=/opt/share/proj (only needed for GeoLambda 2.0.0+)
I have a Lambda function within AWS based on the clear-lambda-storage application. There is code that is as follows:
from argparse import Namespace

from clear_lambda_storage import remove_old_lambda_versions

def clear_lambda_storage(event, context):
    remove_old_lambda_versions(Namespace(
        token_key_id=None,
        token_secret=None,
        regions=None,
        profile=None,
        num_to_keep=3,
        function_names=["insertName"],
    ))
    return "Successful clean! 🗑 ✅"
With the function_names argument I want to pass a list of the names of all the Lambda functions in the account. Is there any way to do this besides manually hardcoding them, so that if a new Lambda function is added the list is updated automatically?
Use the SDK. In Python this is boto3, and the call you want is most likely list_functions: https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/lambda.html#Lambda.Client.list_functions
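A sketch of that: list_functions returns results in pages, so use the paginator to collect every function name, then pass the list in as function_names.

import boto3

def all_function_names():
    lambda_client = boto3.client('lambda')
    paginator = lambda_client.get_paginator('list_functions')
    names = []
    for page in paginator.paginate():  # walks every page of results
        names.extend(fn['FunctionName'] for fn in page['Functions'])
    return names

With that in place, the handler above can call remove_old_lambda_versions(Namespace(..., function_names=all_function_names())) and pick up new functions automatically.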
I have a bottle.py app that should load some data, parts of which get served depending on specific routes. (This is similar to memcached in principle, except the data isn't that big and I don't want the extra complexity.) I can load the data into global variables which are accessible from each function I write, but this seems less clean. Is there any way to load some data into a Bottle() instance during initialization?
You can do it by using bottle.default_app
Here's a simple example.
main.py (used sample code from http://bottlepy.org/docs/dev/)
import bottle
from bottle import route, run, template

app = bottle.default_app()
app.myvar = "Hello there!"  # add new variable to app

@app.route('/hello/<name>')
def index(name='World'):
    return template('<b>Hello {{name}}</b>!', name=name)

run(app, host='localhost', port=8080)
some_handler.py
import bottle

def show_var_from_app():
    var_from_app = bottle.default_app().myvar
    return var_from_app
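Inside a request handler you can also reach the running application through bottle.request.app, which points at the app handling the current request, so you don't have to look up default_app() by name. A small sketch:

import bottle

@bottle.route('/myvar')
def show_myvar():
    # request.app is the application handling this request
    return bottle.request.app.myvar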