I want to create a Lambda function in Python 3.7 that will use boto to perform some AWS queries.
The function is very simple. I added import boto to the vanilla template to try out how to enable boto.
import json
import boto

def lambda_handler(event, context):
    # TODO implement
    return {
        'statusCode': 200,
        'body': json.dumps('Hello from Lambda!')
    }
Needless to say, it fails:
Response:
{
    "errorMessage": "Unable to import module 'lambda_function': No module named 'boto'",
    "errorType": "Runtime.ImportModuleError"
}
So how can I add boto to my code?
I have checked out Layers and it is empty.
I think I can create one by uploading a zip file. But what should I put inside the zip file? What sort of directory structure is Lambda expecting?
boto has been deprecated. You should be using boto3:
import boto3
This is just like adding any other dependency to AWS Lambda. Please follow the documentation to add the boto3 package.
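If you do need to ship a library that is not bundled with the runtime, a layer zip must place Python packages under a top-level python/ directory. Below is a minimal sketch that builds such an archive with the standard library (the function name and paths are my own, not from any AWS SDK):

```python
import os
import zipfile

def build_layer_zip(site_packages_dir: str, zip_path: str) -> None:
    """Package a directory of installed packages into a Lambda layer zip.

    Lambda expects Python dependencies under a top-level 'python/' folder
    inside the layer archive, e.g. python/mylib/__init__.py.
    """
    with zipfile.ZipFile(zip_path, "w", zipfile.ZIP_DEFLATED) as zf:
        for root, _dirs, files in os.walk(site_packages_dir):
            for name in files:
                src = os.path.join(root, name)
                # Prefix every file with python/ so Lambda adds it to sys.path.
                arcname = os.path.join(
                    "python", os.path.relpath(src, site_packages_dir)
                )
                zf.write(src, arcname)
```

You would typically populate the source directory with pip install -t ./package <name>, run build_layer_zip("./package", "layer.zip"), and upload the resulting zip on the Layers page.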
Related
I am trying to configure a Lambda function which will export an API backup to S3. But when I try to get an ordinary Swagger backup through Lambda using this script-
import boto3

client = boto3.client('apigateway')

def lambda_handler(event, context):
    response = client.get_export(
        restApiId='xtmeuujbycids',
        stageName='test',
        exportType='swagger',
        parameters={
            extensions: 'authorizers'
        },
        accepts='application/json'
    )
I am getting this error-
[ERROR] NameError: name 'extensions' is not defined
Please help to resolve this issue.
Could you please check whether the documentation has been explicitly published, and whether it has been deployed to a stage? It must be deployed before it is available in the export.
The problem is in:
parameters={
    extensions: 'authorizers'
}
You're passing a dictionary, which is fine, but the key should be a string. Since you don't have quotes around extensions, Python tries to resolve it as a variable named extensions, which doesn't exist in your code, and so you get the NameError.
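To see the difference outside of any AWS call, here is a minimal sketch:

```python
# Incorrect: the bare name is looked up as a Python variable,
# which is not defined anywhere, so a NameError is raised.
try:
    parameters = {extensions: 'authorizers'}
except NameError as error:
    print(error)  # name 'extensions' is not defined

# Correct: quoting the key makes it a string literal.
parameters = {'extensions': 'authorizers'}
```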
Troposphere module version: 2.6.2.
Python script:
import troposphere.ecs as ecs
from ecs import *
...
template.add_resource(ecs.Cluster(
    "Cluster",
    CapacityProviders=["FARGATE", "FARGATE_SPOT"]
))
...
Why am I getting this error when running the script that creates the template?
AWS::ECS::Cluster object does not support attribute CapacityProviders
I believe it is not supported in 2.6.2. Updating to 2.6.3 should solve your problem.
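As a quick sanity check, you can read the installed version at runtime with the standard library. A sketch (the 2.6.3 minimum comes from this answer, and the naive parsing assumes plain X.Y.Z version strings):

```python
from importlib.metadata import PackageNotFoundError, version

def meets_minimum(package: str, minimum: tuple) -> bool:
    """Return True if the installed version of package is >= minimum."""
    try:
        # Naive parse: works for plain numeric versions like '2.6.3',
        # not for pre-release strings like '2.6.3rc1'.
        installed = tuple(int(part) for part in version(package).split(".")[:3])
    except (PackageNotFoundError, ValueError):
        return False
    return installed >= minimum

# e.g. meets_minimum("troposphere", (2, 6, 3))
```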
I am trying to create a grpc service with a very basic single action which is GetDeployment, takes a namespace and a name as an input, and returns a Kubernetes deployment. The thing is that I do not want to define my own message for the Deployment as it already exists on the official Kubernetes repository.
I am pretty new to grpc and probably do not understand well enough how it works but can I import this message to my own file in a way I could then write the following .proto file ?
syntax = "proto3";

package api;

import "google/api/annotations.proto";
import "k8s.io/kubernetes/pkg/api/v1/generated.proto";

message GetDeploymentOptions {
    string namespace = 1;
    string name = 2;
}

service AppsV1 {
    rpc GetDeployment(GetDeploymentOptions) returns (k8s.io.kubernetes.pkg.api.v1.Deployment) {}
}
Thank you in advance
gRPC codegen is just a protoc plugin. It generates code for service and rpc definitions, but it follows the normal protobuf rules for imports.
In your example, if your file is in src/api.proto and the k8s api repo is a git submodule checked out into thirdparty/k8s.io/api folder you would generate the files you'd need by running:
root>protoc.exe -I thirdparty k8s.io/api/core/v1/generated.proto --go_out=go
root>protoc.exe -I thirdparty src/api.proto --go_out=plugins=grpc:go
The first command is generating the .pb.go file which contains the k8s messages, while the second command is generating the .pb.go file which contains your messages and your service.
Looking at the transitive imports of that file, you may also need to check out apimachinery into k8s.io/apimachinery and run protoc on that file as well.
I want to use dlib on AWS Lambda.
I use the Serverless Framework (runtime: python3.6). I import the dlib package using the serverless-python-requirements plugin.
It works very well locally with $ serverless invoke local -f function. But when I deploy it and invoke it with $ serverless invoke -f function, it throws errors.
serverless.yml's code
plugins:
  - serverless-python-requirements

custom:
  pythonRequirements:
    dockerizePip: non-linux
requirements.txt
boto3==1.9.135
botocore==1.12.135
Pillow==6.0.0
dlib==19.17.0
docutils==0.14
imutils==0.5.2
jmespath==0.9.4
numpy==1.16.3
opencv-python==4.1.0.25
python-dateutil==2.8.0
s3transfer==0.2.0
six==1.12.0
urllib3==1.24.2
Error log from AWS Lambda:
Unable to import module 'handler': libpng16.so.16: cannot open shared object file: No such file or directory
Could you tell me how to use dlib on AWS Lambda...
On the old boto library it was simple enough to use the proxy, proxy_port, proxy_user and proxy_pass parameters when you open a connection. However, I could not find any equivalent way to programmatically define the proxy parameters in boto3. :(
As of at least version 1.5.79, botocore accepts a proxies argument in the botocore config.
e.g.
import boto3
from botocore.config import Config
boto3.resource('s3', config=Config(proxies={'https': 'foo.bar:3128'}))
boto3 resource
https://boto3.readthedocs.io/en/latest/reference/core/session.html#boto3.session.Session.resource
botocore config
https://botocore.readthedocs.io/en/stable/reference/config.html#botocore.config.Config
If your proxy server does not have a password, try the following:
import os
os.environ["HTTP_PROXY"] = "http://proxy.com:port"
os.environ["HTTPS_PROXY"] = "https://proxy.com:port"
If your proxy server has a password, try the following:
import os
os.environ["HTTP_PROXY"] = "http://user:password@proxy.com:port"
os.environ["HTTPS_PROXY"] = "https://user:password@proxy.com:port"
Apart from altering the environment variable, I'll present what I found in the code.
Since boto3 uses botocore, I had a look through the source code:
https://github.com/boto/botocore/blob/66008c874ebfa9ee7530d944d274480347ac3432/botocore/endpoint.py#L265
From this link, we end up at:
def _get_proxies(self, url):
    # We could also support getting proxies from a config file,
    # but for now proxy support is taken from the environment.
    return get_environ_proxies(url)
...which is called by proxies = self._get_proxies(final_endpoint_url) in the EndpointCreator class.
Long story short, if you're using Python 2 it will use the getproxies method from urllib2, and if you're using Python 3, it will use the one from urllib.request.
get_environ_proxies returns a dict of the form {'http': 'url'} (and I'm guessing 'https' too).
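You can see that dict shape with the standard library's own proxy discovery, which reads the *_proxy environment variables on each call. A small sketch (the proxy address is made up):

```python
import os
import urllib.request

# Lowercase *_proxy variables are the most portable spelling.
os.environ["http_proxy"] = "http://someproxy:1234/"
os.environ["https_proxy"] = "https://someproxy:1234/"

proxies = urllib.request.getproxies()
# On Linux this returns a scheme-to-URL mapping such as
# {'http': 'http://someproxy:1234/', 'https': 'https://someproxy:1234/'}
```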
You could always patch the code, but that is poor practice.
This is one of the rare occasions when I would recommend monkey-patching, at least until the Boto developers allow connection-specific proxy settings:
import botocore.endpoint

def _get_proxies(self, url):
    return {'http': 'http://someproxy:1234/', 'https': 'https://someproxy:1234/'}

botocore.endpoint.EndpointCreator._get_proxies = _get_proxies

import boto3