Are Lambda Layers supported for Lambdas deployed to Greengrass?

I am trying to deploy a binary utility along with my Python-based lambda to a Greengrass group.
Using a layer seemed like a natural way to accomplish this. No errors are encountered during the deployment, but I cannot find any evidence that the layer exists on the edge device (Ubuntu 18.04).
I have also been unable to find any documentation that confirms or denies layers work with Greengrass.
To move forward, I am bundling the utility within the Lambda deployment package itself. This works, but a layer would be more flexible...
Any clarification would be appreciated.

Just from my experience, no, Layers do not get pushed to the Greengrass host, so if your lambda function depends on libraries in a layer, it will fail to run, even though deployments will go through without issue.
I believe AWS is no longer updating Greengrass V1, as they're focused on Greengrass V2, so I doubt this issue will get addressed. As a workaround, package everything within the Lambda function and don't use layers for functions that get pushed to Greengrass.
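For reference, a minimal sketch of that workaround (the bin/mytool path, the handler name, and the package layout are my assumptions, not from the question): ship the binary inside the deployment package next to the handler, resolve it relative to the handler file, and invoke it.

import os
import stat
import subprocess

# Assumed layout: the binary ships at bin/mytool next to this handler file,
# since layers never reach the Greengrass host.
TOOL_PATH = os.path.join(os.path.dirname(os.path.abspath(__file__)), "bin", "mytool")

def function_handler(event, context):
    # Zip packaging does not reliably preserve the execute bit, so restore it.
    os.chmod(TOOL_PATH, os.stat(TOOL_PATH).st_mode | stat.S_IXUSR)
    # capture_output and text require Python 3.7+.
    result = subprocess.run([TOOL_PATH, "--version"], capture_output=True, text=True)
    return {"stdout": result.stdout, "returncode": result.returncode}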

Related

Not able to use additional dependencies in AWS lambda function for python

I am building a lambda function to be deployed in a Greengrass core device which has additional dependencies such as NumPy. I have followed the instructions in the official documentation but have not been able to get it working.
I have created a virtual environment, installed all of the dependencies, and compressed all the lib files and directories along with the main code which contains the function handler.
Can anyone help me out regarding this issue?
You should make sure that the hierarchy of your deployed package is correct:
the Python dependencies should be at the top level of the archive, not nested under a directory.
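To illustrate the layout (a minimal sketch; the package/ directory and handler.py names are assumptions on my part): install the dependencies with pip install -t package/, then build the zip so numpy/ and friends sit at the top level alongside the handler.

import os
import zipfile

PACKAGE_DIR = "package"      # filled by: pip install -t package/ numpy ...
OUTPUT_ZIP = "deployment.zip"

with zipfile.ZipFile(OUTPUT_ZIP, "w", zipfile.ZIP_DEFLATED) as zf:
    for root, _, files in os.walk(PACKAGE_DIR):
        for name in files:
            path = os.path.join(root, name)
            # Strip the leading "package/" so dependencies land at the zip root.
            zf.write(path, os.path.relpath(path, PACKAGE_DIR))
    zf.write("handler.py")   # the function handler, also at the top level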
My strongest suggestion is to use a framework to deploy your lambdas.
We're using the Serverless Framework, which has many benefits. In particular, it creates this hierarchy behind the scenes:
https://www.serverless.com/blog/serverless-python-packaging
Disclosure: I'm working for Lumigo, a 3rd party company that helps you to debug your serverless architecture.

Supporting multiple versions of Kubernetes APIs in a Go program

Kubernetes has a rapidly evolving API and I am trying to find best practices, recommendations, or really any kind of guidance about how to write Go software that gracefully handles supporting its evolving API and supports multiple versions simultaneously. I am sure I am not the first person to attempt this, but so far I have not found any guidance about Kubernetes specifically, and what I have read about polymorphism in Go has not inspired a great solution yet.
Kubernetes is written in Go and provides Go packages like k8s.io/api/extensions/v1beta1 and k8s.io/api/networking/v1beta1. Kubernetes resources, for example Ingress, are first released in one API group (extensions) and as they become more mature, get moved to another API group (networking) and can also change versions (e.g. go from v1beta1 to plain v1). Kubernetes also provides k8s.io/client-go for interacting with a Kubernetes cluster.
I am an experienced object-oriented (and other types of) programmer, but fairly new to Go and completely new to the Kubernetes packages. What I want to accomplish is a program architecture that allows me to write code once and have it work on any version of the Kubernetes resource, at least as long as the resource contains all the features I care about. In a typical object-oriented environment, I would create a base Ingress class and have all these various versions derive from it, and package up operations so that I could just work on Ingress everywhere. My sense is that Go intends for people to take a different approach, and in any case there are complications because of the client/server aspect.
Client/server and APIs
My Go program is a client of the Kubernetes server. Different versions of the server support different versions of the Kubernetes API, and therefore different versions of the Ingress resource. So my first problem is that I have to do something like this to get a list of all the Ingresses:
ingressesExt, err := il.kubeClient.ExtensionsV1beta1().Ingresses(namespace).List(metav1.ListOptions{})
ingressesNet, err := il.kubeClient.NetworkingV1beta1().Ingresses(namespace).List(metav1.ListOptions{})
I have to gracefully handle errors about the API not being supported. Because the return types are different, AFAIK there is no unified interface where I can make one call and get the results in a single list. This seems like the sort of thing someone should already have solved, but so far I have not found anything.
Type conversion
I also have to find some way to merge ingressesExt and ingressesNet into a single usable list, with an eye toward maintainability/extensibility now that Ingress has graduated to NetworkingV1.
Kubernetes utilities
I see that Kubernetes provides a lot of auto-generated code and utilities, but I have not found a lot of documentation about how to use them. For example, Ingress has functions like
DeepCopy
Marshal
XXX_DiscardUnknown
XXX_Merge
XXX_Unmarshal
Maybe I can use these to do the type conversion? Combine marshal, unmarshal, discard, and merge somehow to take the data from one version and import it into another?
Questions
Hopefully you see the issue and understand what I am trying to achieve.
Are there packages from Kubernetes or other open source authors that make some progress in unifying the APIs like I need?
Are any of the Kubernetes auto-generated functions meant for general use (as opposed to internal use) and helpful to my challenge? I have not found documentation for any but DeepCopy.
What is the "Go way" of abstracting out the differences between the various versions of the Ingress object such that I can write the rest of the code to work on any version? Keep in mind that I may need to make another API call for further processing, in which case I would need to know the concrete type of the object and select the right API call. It is not obvious to me that client-go provides any support for such auto-selection of API calls.

AWS Lambda vs EC2 REST API

I am not an expert in AWS but have some experience. I have a situation where an Angular UI (hosted on EC2) has to talk to an RDS DB instance. Everything in the stack is set except the API (middleware). We are thinking of using Lambda (as our traffic is unknown at this time). Here we have a lot of choices to make on the programming side, like C#, Python, or Node (we are tilting towards C# or Python based on some research and our skills: Python has a good cold start, and C#/.NET Core is stable in terms of performance).
Since we are using Lambda, of course we should go the API Gateway route. All set, but now: can all the business logic of our application reside in Lambda? If so, wouldn't the Lambda become huge and take a performance hit (more memory, more computational resources, and thus higher costs)? So we thought of having Lambda handle the lightweight processing, with the heavy lifting moved to a .NET API hosted on EC2.
Are there issues with this approach that we are not seeing? I should also mention that the Lambda has to call RDS for CRUD operations, so should I be thinking hard about concurrency, since that might fall into the stateful category?
The advantage of AWS Lambda here is scaling. As you already know, Lambda is fully managed, so you get that scaling for free in this case.
If you host the API on EC2, you don't get the scaling part out of the box. Of course, you can start using ECS, an Auto Scaling Group, and so on, but that takes you down another route.
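On the worry about a single Lambda growing huge: a common pattern is to keep each handler thin and put the business logic in plain modules shared across functions, so no individual function bloats. A rough sketch (all names here are made up):

import json

def create_order(payload):
    # Business logic lives in ordinary functions/modules like this one, so it
    # can be unit tested and shared without bloating any single handler.
    return {"id": "123", "item": payload.get("item")}

def handler(event, context):
    # Thin API Gateway adapter: parse the request, delegate, shape the response.
    body = json.loads(event.get("body") or "{}")
    return {"statusCode": 201, "body": json.dumps(create_order(body))}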
Are you building this application for learning or for production?

Using VS Code debugger for serverless lambda flask applications

I have been creating some Lambda functions on AWS using the Serverless Framework, Flask, and SLS WSGI, plus some DynamoDB tables, but those should not matter in this case.
The problem I am facing is that I cannot debug the whole thing end to end. I am able to run sls wsgi serve and get a local instance of my Lambda functions, happy days. However, I am a little spoiled by other dev tools, languages, and IDEs (even just Flask itself) that let me set breakpoints, see the scope, step through, etc. So I would really like to be able to do that here as well.
I tried launching the sls command mentioned above in a launch configuration inside vs code, no luck. Next thing I tried was to run the default flask launch config but that obviously didn't include all the configuration stored in the sls.yml file which is essential for accessing the local dynamodb instance.
The last thing I tried was to attach to ptvsd at the end of my app.py file: I would hit a wait action from ptvsd, attach the debugger in VS Code to the specified port, and the attach seems to succeed and resume execution. However, it seems that sls wsgi runs the file twice, so the attach happens in the first instance and not the second, which then never triggers a breakpoint when I execute an API call through Postman.
I guess I could include the wait step everywhere manually, then attach for each method that I am trying to debug inside the code instead of in the IDE, but that seems like overkill and not very convenient.
I have been looking for answers online and reading through the docs, but have found nothing further.
I figured out that I can use "Attach using Process Id". It is, however, a little tricky because there are always two processes running in the list (with different PIDs). It's not great, but it does the trick.
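If the double execution comes from the werkzeug reloader that the local dev server uses (an assumption on my part), another option is a guard in app.py so ptvsd only attaches in the child process that actually serves requests:

import os
import ptvsd

# The reloader runs app.py twice; only the reloaded child that serves
# requests has WERKZEUG_RUN_MAIN set, so attach the debugger there.
if os.environ.get("WERKZEUG_RUN_MAIN") == "true":
    ptvsd.enable_attach(address=("0.0.0.0", 5890))  # port 5890 is arbitrary
    ptvsd.wait_for_attach()  # block until the VS Code debugger attaches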
One technique I have found useful, albeit in a Node environment but it should apply here too, is to use unit tests as a way to execute code locally with the ability to tie in a debugger, and to use mocking to stub away external dependencies such as AWS services (S3, DynamoDB, etc.). I wrote a blog post about setting this up for Node, but you may find it useful for setting things up with Python as well: https://serverless.com/blog/serverless-local-development/
However, in the world of serverless development it ultimately doesn't matter how sophisticated your local development environment gets; you will have to test in the cloud environment as well. The unit-testing technique I described is good for catching basic syntactical and logical errors, but you will still need to deploy to the cloud and test there. It's one of the reasons we at Serverless are working hard on improving how quickly and easily you can deploy to the cloud, so that testing in AWS can replace local testing.

What gem should I use to work with AWS

I'm currently writing an application in Ruby on Rails that uses AWS. I see two options for gems: aws-sdk and fog. Fog seems to support almost all of the AWS services except for SNS (which I wanted to use :/), has mock services for testing, and on top of that lets you swap in Rackspace or a different provider rather easily. Is there any big reason why I should use AWS's SDK? It supports SNS, but not RDS, and does not come with mocking.
If I'm missing something please let me know as I am new to this.
Thanks in advance.
You may also want to check out rightaws, though unfortunately it doesn't support SNS either. It was one of the first libraries available and supports most of the functionality. However, fog releases new versions more often, is catching up quickly, and is a bit more high-level. The aws-sdk gem was only released recently, and the main reason to go with it is that it comes from Amazon itself and will likely become the standard. That is why we included it in rubystack. We expect people to build higher-level libraries on top of it.
aws-sdk supports SNS but does not mock the services. It does, however, provide basic stubbing:
AWS.stub!
This causes all service requests to "do nothing" and return "empty responses". It is used extensively inside the specs provided with the gem. This is not the same as mocking a service, but it can be a useful testing aid.
