Serverless deploying multiple functions - aws-lambda

I've recently updated my serverless project, and I've found that many things have changed in the last few updates.
https://serverless.com/
I don't fully understand what the correct way is to have multiple lambda functions and API Gateway endpoints related to the same project. With the old Serverless I had every lambda and endpoint as a completely separate function, and this worked pretty well for me.
I can't seem to do this anymore: if I try, my second lambda function overrides my first, presumably because my "service name" for both is the same. My service name is the same because I want both REST endpoints in the same API in API Gateway, since Serverless creates the API name based on the service name.
So then I tried adding both functions to the same "service". This worked for the most part, except that now I have to merge the custom role statements for all my functions into the same role (because this one role is now linked to all my functions), effectively giving each individual function more permissions than it should have. The other issue is that the handler files for all the different functions end up in every function's deployment bundle.
So basically, I'm not sure what the correct approach is for having multiple functions that relate to the same project but are separate in functionality. It used to make sense; now it doesn't.
If anybody can give me some pointers, I'd appreciate it.
Thanks

I understand your frustration. I had the same feeling until I looked deeper into the new version and formed a better understanding. One thing to note, though, is that the new version is not completely finished yet. So if something is completely missing, you can file an issue and have it prioritized before 1.0 is out.
You are supposed to define multiple functions under the same service, in the functions: section of serverless.yml. To package these functions individually (excluding the code for the other functions) you will have to set individually: true under the package: section. You can then use the include and exclude options at the root level and at the function level as well. There's an upcoming change that will let you use glob syntax in your include and exclude options (for example **/*-fn.js). You can find more about packaging here: https://www.serverless.com/framework/docs/providers/aws/guide/deploying.
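For example, a minimal serverless.yml along these lines (the service, function, and handler names here are made up, and the runtime is only illustrative):

service: my-project

provider:
  name: aws
  runtime: nodejs12.x

package:
  individually: true   # each function gets its own bundle

functions:
  createUser:
    handler: handlers/createUser.handler
    package:
      exclude:
        - handlers/**             # drop all handlers...
      include:
        - handlers/createUser.js  # ...except this function's own
    events:
      - http:
          path: users
          method: post
  getUser:
    handler: handlers/getUser.handler
    package:
      exclude:
        - handlers/**
      include:
        - handlers/getUser.js
    events:
      - http:
          path: users/{id}
          method: get

Both functions deploy into the same service, so they share one API in API Gateway, and each bundle only carries its own handler file.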
I'm not sure how to use different roles for different functions under the same service, though. How did you do it with 0.5?

I was trying to find a solution for individual IAM roles per function as well. I couldn't find a way to do it, but while looking through the documentation I found this line on that page: "Support for separate IAM Roles per function is coming soon." So at least we know they are working on it.

The "IAM Roles Per Function" plugin for Serverless allows you to do exactly what it says on the tin: specify roles for each function. You can still use the provider-level roles as well:
Per the plugin's documentation: "By default, function level iamRoleStatements override the provider level definition. It is also possible to inherit the provider level definition by specifying the option iamRoleStatementsInherit: true."
EDIT: You can also apply a predefined AWS role at both the provider and function level.
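With the plugin, a serverless.yml might look roughly like this (the handler path, table ARN, and statements are placeholders):

plugins:
  - serverless-iam-roles-per-function

provider:
  name: aws
  iamRoleStatements:        # shared, provider-level permissions
    - Effect: Allow
      Action:
        - logs:CreateLogStream
        - logs:PutLogEvents
      Resource: "*"

functions:
  createUser:
    handler: handlers/createUser.handler
    iamRoleStatementsInherit: true   # keep the provider-level statements too
    iamRoleStatements:               # permissions only this function receives
      - Effect: Allow
        Action:
          - dynamodb:PutItem
        Resource: arn:aws:dynamodb:us-east-1:123456789012:table/users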

Related

Vaadin UI between sessions

When I create a new session (or try to access the app from another computer) in Vaadin Flow, I get this error:
Can't move a node from one state tree to another
From this link, I read something about UI and getUIId().
However, I don't understand how I should change my application in order to fix the error.
As Denis mentioned in the forum post you linked, wrong scope sounds like the most likely culprit. In other words, you are trying to use the exact same component instance in two different UIs, when both UIs should have their own instance. It's not possible to use the same instance in two places at the same time.
You can find the documentation for Vaadin Spring scopes here: https://vaadin.com/docs/latest/flow/integrations/spring/scopes
One possible cause of errors like this is storing a Component in a static variable. You shouldn't do that: a Component instance can only belong to a single UI, and a single UI in turn (in practice) means a single browser tab.
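To make that concrete, a minimal sketch in Vaadin Flow (the class names here are made up):

import com.vaadin.flow.component.html.Div;
import com.vaadin.flow.component.html.Span;
import com.vaadin.flow.router.Route;

// BAD: one Component instance shared by every session and UI.
// The second UI that attaches it fails with
// "Can't move a node from one state tree to another".
class Shared {
    static final Span GREETING = new Span("Hello");
}

// GOOD: every UI (in practice, every browser tab) builds its own instance.
@Route("hello")
public class HelloView extends Div {
    public HelloView() {
        add(new Span("Hello"));
    }
}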

AWS SAM APIGatewayProxyRequestEvent HttpMethod not set when starting API locally

I created an application with an AWS::Serverless::Function resource, which has an HttpApi event attached to it. I thought it would be a good idea to create one lambda per resource, so that e.g. POST, GET and PUT on /customer are all handled by a single lambda, which decides which action to take using:
switch (input.getHttpMethod()) {
    case "GET": ...
    case "POST": ...
}
So now, coming to my problem: when starting the application using sam local start-api, my lambda gets called correctly, but neither input.getHttpMethod() nor input.getRequestContext().getHttpMethod() is set.
Given that SAM supports multiple HttpApi events, failing to provide the HTTP method when running the application locally all but rules out local development. Am I doing something wrong, or is this really not working? I'm using Java, by the way; I can't tell whether this problem exists with other languages as well.
Just in case: is my "one lambda per resource" approach wrong? Should every single action have its own lambda?
I found the problem. HttpApi uses "PayloadFormatVersion: 2.0" as its default. My input was of type com.amazonaws.services.lambda.runtime.events.APIGatewayProxyRequestEvent, but that class represents format 1.0; I had to use com.amazonaws.services.lambda.runtime.events.APIGatewayV2HTTPEvent to get it to work.
This fails silently because the event instances are simply deserialized from the JSON input, and several fields are the same in both classes.
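For reference, a minimal handler against the 2.0 event type might look like this (the class name and routing are illustrative; the event classes come from the aws-lambda-java-events library):

import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;
import com.amazonaws.services.lambda.runtime.events.APIGatewayV2HTTPEvent;
import com.amazonaws.services.lambda.runtime.events.APIGatewayV2HTTPResponse;

public class CustomerHandler
        implements RequestHandler<APIGatewayV2HTTPEvent, APIGatewayV2HTTPResponse> {

    @Override
    public APIGatewayV2HTTPResponse handleRequest(APIGatewayV2HTTPEvent event, Context context) {
        // In format 2.0 the method lives under requestContext.http.method,
        // not under the top-level httpMethod field of the 1.0 event.
        String method = event.getRequestContext().getHttp().getMethod();
        switch (method) {
            case "GET":
                // handle GET /customer
                break;
            case "POST":
                // handle POST /customer
                break;
        }
        return APIGatewayV2HTTPResponse.builder().withStatusCode(200).build();
    }
}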

How to group users together by aws-cognito attributes?

I am trying to group users of my application together under a 'Company'.
I have done this in other applications by giving the users an account attribute/property called "company", whose value is a string.
Then all data associated with that company is available to that user.
How can I do this with the AWS Amplify framework?
After some research I was able to figure this out.
In case anyone else runs into this problem...
What I was looking for was what AWS Amplify classifies as a 'Custom Resolver'. Essentially the resolver is the API logic for the GraphQL server on AWS' end.
Within your Amplify project structure there should be a folder called 'Resolvers'.
Mine was in
/backend/api/[API_NAME]/resolvers
Inside this folder you are able to place different kinds of custom resolver logic for your backend.
Ideally you would place two custom files for every custom endpoint, as follows:
Query.listSomeTable.req.vtl
Query.listSomeTable.res.vtl
OR
Mutation.createSomeTable.req.vtl
Mutation.createSomeTable.res.vtl
These two files will override the resolver logic that AWS produces automatically. The files are written in the Apache Velocity Template Language ('.vtl').
You can read more about it here:
https://aws-amplify.github.io/docs/cli-toolchain/graphql#add-a-custom-resolver-that-targets-a-dynamodb-table-from-model
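For illustration, the simplest possible pair for a list query might look like this (the table and query names are made up; the request/response format is AppSync's standard DynamoDB resolver mapping):

## Query.listSomeTable.req.vtl -- a minimal DynamoDB Scan request template
{
  "version": "2017-02-28",
  "operation": "Scan"
}

## Query.listSomeTable.res.vtl -- pass the scanned items back to GraphQL
$util.toJson($ctx.result.items)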

Dynamic Validation on K8s Configuration Files (YAML) Using Custom Rules

I am looking for a static validator that validates Kubernetes deployment or service yaml files based on custom rules. For example, I can have a rule to disallow some fields in the yaml files (although they are valid fields in K8s), or specify a range for values of a field. The validation is triggered independent of kubectl.
The closest solution I found is kube-lint: https://github.com/viglesiasce/kube-lint. However, it does not seem to be maintained, since the last commit was in March 2017.
Can anyone let me know if there is anything else that does the dynamic validation on K8s yaml files based on custom rules?
I believe the thing you are looking for is an Admission Controller, with its two baked-in kinds, "validating" and "mutating". However, as the docs say, if that's not powerful enough for your needs, there are also Dynamic Admission Controllers (admission webhooks).
Be sure to watch Pod Security Policies as it matures out of beta (or, I guess, try it even now)
I haven't ever used them, so I don't know what the user experience is like (for example: does kubectl offer a friendly message, or just a "401: Nope" kind of thing?), but as for the "disallow some fields" part, I am pretty confident they will do exactly what you wish.
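As a rough sketch, registering a validating webhook looks something like this (the service name, namespace, and path are placeholders; the webhook itself is a small server you write that receives an AdmissionReview object and answers allowed: true or false, which is where your custom rules live):

apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: deployment-field-policy
webhooks:
  - name: deployment-field-policy.example.com
    rules:
      - apiGroups: ["apps"]
        apiVersions: ["v1"]
        operations: ["CREATE", "UPDATE"]
        resources: ["deployments"]
    clientConfig:
      service:
        name: policy-webhook      # placeholder: your validation server
        namespace: kube-system
        path: /validate
      caBundle: <base64 CA cert>  # placeholder
    admissionReviewVersions: ["v1"]
    sideEffects: None
    failurePolicy: Fail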

Using "check" package causes another package to error

I'm using the Check package to validate parameters passed to Meteor methods. And I'm using Audit argument checks to enforce this.
However, I've added another package, Meteor Tags and when I try to use methods from the Tags package, I get a server error "Exception while invoking method '/patterns/addTag' Error: Did not check() all arguments during call to '/patterns/addTag'".
I think I understand why this error happens: the method in the Tags package doesn't check its inputs, so Audit Argument Checks generates an error. But I can't find any way around this, apart from (1) not enforcing checking, or (2) hacking the Tags package methods so they use check. Neither of these seems like a great option: checking server parameters is a good idea, and hacking a package is not very maintainable.
Does anybody know if there is any smart way to use 'Audit argument checks' with packages that provide new server methods? I have looked at the Check documents, and searched online, but I haven't found an answer.
I hope this question makes sense.
Using audit-argument-checks is like saying: "I want to be serious about the security of the methods in my app." It's global to all methods in your app's codebase, including the methods from your installed packages.
There is no way to specify which parts of the app get checked, as that would be the equivalent of saying: "I want to be serious about the security of the methods I've written, but I don't care about the security holes created by some packages" (which doesn't make a lot of sense).
Note to package authors
Check your method arguments. It's not hard, and it prevents this situation from happening. Frankly, a package without this basic security really shouldn't be installed in the first place.
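For example, adding checks to a method body is one line per argument (a hypothetical sketch; the real arguments of '/patterns/addTag' in the Tags package may differ):

import { Meteor } from 'meteor/meteor';
import { check } from 'meteor/check';

Meteor.methods({
  '/patterns/addTag'(tag, documentId) {
    check(tag, String);        // reject anything that isn't a string
    check(documentId, String);
    // ...original method body...
  },
});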
What you should do
Unless you have a throwaway app, I wouldn't recommend removing audit-argument-checks. Instead I'd do the following (assuming the package really has something of value):
Open an issue on GitHub and let the maintainer know what's up.
Fork the code, and add the required checks. Keep this version as a local package.
Submit a pull request for the changes.
If all goes well, your PR will be accepted and everyone can benefit from the change. In the worst case, you'll still have a local copy that you can use in your app.
