Why does `aws organizations list-roots` return a list instead of a single value? - aws-organizations

The AWS CLI aws organizations list-roots returns a list.
But a given AWS account/login can only belong to zero or one AWS Organization, right? Why does it return a list, then (is there ever going to be two or more items)?

I guess it's for forward compatibility. In the future, AWS Organizations might support several roots, and the API would already be prepared for that.
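Until then, client code can defensively assume a single item. A minimal sketch in Python, where the response shape follows the documented list-roots output but the IDs are made up:

```python
# Sketch: reading the single organization root, assuming the Roots list
# currently contains exactly one item. The sample response below is
# illustrative; the IDs are invented.

def get_single_root(list_roots_response):
    """Return the one root from a list-roots response, failing loudly
    if AWS ever starts returning more than one."""
    roots = list_roots_response["Roots"]
    if len(roots) != 1:
        raise ValueError(f"expected exactly one root, got {len(roots)}")
    return roots[0]

# Example response shaped like the API's documented output:
sample = {
    "Roots": [
        {
            "Id": "r-examplerootid",
            "Arn": "arn:aws:organizations::123456789012:root/o-exampleorgid/r-examplerootid",
            "Name": "Root",
            "PolicyTypes": [],
        }
    ]
}

print(get_single_root(sample)["Name"])  # Root
```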

Related

AWS cost one vs many lambda functions

I am currently developing an API management platform where it is possible to move every endpoint action to a serverless function (lambda).
My question is: is it cheaper to use a single Lambda function that invokes the complete app, with the app handling the routing internally, or is it better to use AWS routing and create one Lambda per endpoint? In my case that could mean 100+ Lambda functions.
From a technical perspective, I think multiple Lambda functions is better, since we can then scale each function independently, but I am not sure how it looks on the cost side. So please let me know if you have any experience with this.
Look here:
https://s3.amazonaws.com/lambda-tools/pricing-calculator.html
The most important thing to keep in mind is to keep your Lambda functions short-running: slow executions can inflate your bill, while many fast invocations will not. You need to know your execution times! To maintain a very large set of Lambda functions, I recommend the Serverless Framework:
https://www.serverless.com/
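To make the cost question concrete: Lambda bills per request plus per GB-second of execution, so splitting the same traffic across many functions does not by itself change the bill; savings come from right-sizing each function's memory. A rough sketch (the rates are assumptions based on published us-east-1 pricing and may be outdated; check the current pricing page):

```python
# Rough Lambda cost comparison. Pricing figures are assumptions based on
# published us-east-1 rates; verify against the current pricing page.

PRICE_PER_REQUEST = 0.20 / 1_000_000   # $ per invocation
PRICE_PER_GB_SECOND = 0.0000166667     # $ per GB-second of execution

def monthly_cost(invocations, avg_ms, memory_mb):
    """Estimated monthly cost for one function's traffic."""
    gb_seconds = invocations * (avg_ms / 1000) * (memory_mb / 1024)
    return invocations * PRICE_PER_REQUEST + gb_seconds * PRICE_PER_GB_SECOND

# One big router function vs. 100 small per-endpoint functions serving
# the same total traffic with the same memory setting: the totals match,
# because billing is per request and per GB-second, not per function.
single = monthly_cost(invocations=1_000_000, avg_ms=200, memory_mb=512)
split = sum(monthly_cost(invocations=10_000, avg_ms=200, memory_mb=512)
            for _ in range(100))
print(f"single: ${single:.2f}  split: ${split:.2f}")  # both about $1.87
```

Where per-endpoint functions can actually win is when lightweight endpoints get a smaller `memory_mb` than the monolithic router would need.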

Best method to persist data from an AWS Lambda invocation?

I use AWS Simple Email Services (SES) for email. I've configured SES to save incoming email to an S3 bucket, which triggers an AWS Lambda function. This function reads the new object and forwards the object contents to an alternate email address.
I'd like to log some basic info from my AWS Lambda function during each invocation: who the email is from, to whom it was sent, whether it contained any links, etc.
Ideally I'd save this info to a database, but since AWS Lambda invocations are costly (relative to other AWS operations), I'd like to do this as efficiently as possible.
I was thinking I could issue an HTTPS GET request to a private endpoint with a query string containing the info I want logged. Since I could fire the request asynchronously at the outset and continue processing, I thought this might be a cheap and efficient approach.
Is this a good method? Are there any alternatives?
My Lambda function fires irregularly, so despite Lambda execution environments being kept alive for 10 minutes or so after firing, a database connection seems likely to be slow and costly, since AWS charges per 100 ms of usage.
Since I could conceivably get thousands of emails per month (I maintain hundreds of domain names, so my numbers aren't exaggerated), keeping my Lambda function efficient is paramount to cost. Thanks in advance.
I do not think that thousands of emails per month should be a problem; these cloud services have been developed with scalability in mind and can go way beyond the numbers you are suggesting.
In terms of persisting, without logs or metrics I cannot really see why your DB connection would be slow. As long as you stay within AWS, traffic uses its internal infrastructure, so speeds will be high and not something you should be worrying about.
I am not an expert on billing, but from what you are describing, Lambda + S3 + DynamoDB seems well suited to your use case.
From the type of data you are describing (email data), it doesn't seem that you would have either a memory issue (Lambdas have memory constraints, which can be a pain) or an I/O bottleneck. If you can share more details on the memory used during invocation, the time taken, and how much data you store per invocation, that would be great.
I think you could easily store JSON-serialized strings of your email data in DynamoDB; it should be pretty seamless and not that costly.
I have not used SES, but you could put a trigger on the DynamoDB table for whenever you store a record, in case you want to follow up with another Lambda.
You could combine S3 + DynamoDB: when you store a record, upload a file containing the record to a new S3 key and update the row in DynamoDB with a pointer to the new S3 object.
You can now persist data using AWS EFS.
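As a sketch of the DynamoDB suggestion above, assuming a hypothetical `email-log` table and made-up attribute names (adapt them to your parsed SES/S3 payload), the handler could build a low-level item like this:

```python
# Sketch: logging email metadata to DynamoDB from a Lambda handler.
# The table name ("email-log") and attribute names are assumptions.
import json
import time

def build_email_record(message_id, sender, recipients, has_links):
    """Pure helper: turn parsed email info into a low-level DynamoDB item."""
    return {
        "MessageId": {"S": message_id},
        "From": {"S": sender},
        "To": {"S": json.dumps(recipients)},
        "HasLinks": {"BOOL": has_links},
        "ReceivedAt": {"N": str(int(time.time()))},
    }

def log_email(message_id, sender, recipients, has_links):
    """Write one metadata record; requires DynamoDB permissions."""
    import boto3  # bundled in the Lambda runtime
    item = build_email_record(message_id, sender, recipients, has_links)
    boto3.client("dynamodb").put_item(TableName="email-log", Item=item)
```

Keeping the item-building logic in a pure function makes it easy to unit-test without touching AWS.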

Why AWS Lambda suggests to set up two subnets if VPC is configured?

Is this because of IP availability?
I've always thought that creating a single huge subnet instead of two subnets of the same combined size was exactly the same. I haven't experienced any performance issues by doing this, but I haven't found anything in the docs to confirm that it is a valid approach.
Why does AWS Lambda suggest configuring two subnets? Is there a technical reason for that?
Thanks in advance.
It is not "only" for performance; it is for high availability (fault tolerance), as explained here:
It's a best practice to create multiple private subnets across different Availability Zones for redundancy and so that Lambda can ensure high availability for your function.
Resilience documentation
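For illustration, a small sketch of attaching a function to two subnets in different Availability Zones (the IDs are placeholders, and the helper with its two-subnet validation rule is my own, not an AWS requirement enforced by the API):

```python
# Sketch: configure a Lambda function's VPC settings with subnets in at
# least two Availability Zones for high availability. IDs are placeholders.

def vpc_config(subnet_ids, security_group_ids):
    """Build the VpcConfig block; insists on two or more subnets."""
    if len(subnet_ids) < 2:
        raise ValueError("use at least two subnets in different AZs")
    return {"SubnetIds": subnet_ids, "SecurityGroupIds": security_group_ids}

def attach_to_vpc(function_name, subnet_ids, security_group_ids):
    """Apply the VPC config; requires Lambda and EC2 network permissions."""
    import boto3  # available in most AWS environments
    boto3.client("lambda").update_function_configuration(
        FunctionName=function_name,
        VpcConfig=vpc_config(subnet_ids, security_group_ids),
    )
```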

AWS Lambda Reserved and Unreserved Concurrency Alarm

In our setup, we have lots of AWS Lambda functions, developed by different teams. Some of them have set a reserved concurrency, which eats out of the total concurrency of the account (1,000).
Is there a way to monitor, or set an alarm that triggers, when the unreserved concurrency drops below a specific level?
This would help us proactively alleviate the issue and reduce failures.
In AWS there are pre-defined metrics related to Lambda concurrency that are exposed in CloudWatch:
ConcurrentExecutions: shows the concurrent executions happening at that moment across all the Lambda functions in the account, both reserved and unreserved.
UnreservedConcurrentExecutions: shows the total concurrent executions happening at that moment that are using the unreserved concurrency.
The ConcurrentExecutions and UnreservedConcurrentExecutions values I was looking for, however, are the account-level limits returned by this CLI command:
$ aws lambda get-account-settings
{
    "AccountLimit": {
        "TotalCodeSize": 1231232132,
        "CodeSizeUnzipped": 3242424,
        "CodeSizeZipped": 324343434,
        "ConcurrentExecutions": 10000,
        "UnreservedConcurrentExecutions": 4000
    },
    "AccountUsage": {
        "TotalCodeSize": 36972950817,
        "FunctionCount": 1310
    }
}
These values cannot be shown directly in a CloudWatch dashboard, as dashboards cannot execute API calls to fetch and display data.
Solution
We can create a Lambda function that extracts the account-wide ConcurrentExecutions and UnreservedConcurrentExecutions values via the API and publishes them to CloudWatch as custom metrics, and schedule that Lambda function using CloudWatch Events.
Once we have the metric, we can set the required alarm on the unreserved concurrency.
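A sketch of that scheduled Lambda, assuming a made-up `Custom/LambdaConcurrency` metric namespace:

```python
# Sketch of the scheduled Lambda: read the account-level concurrency
# settings and publish them as custom CloudWatch metrics. The namespace
# and metric names are assumptions; pick your own.

def extract_concurrency(account_settings):
    """Pure helper: pull the two limits out of a get-account-settings response."""
    limits = account_settings["AccountLimit"]
    return (limits["ConcurrentExecutions"],
            limits["UnreservedConcurrentExecutions"])

def handler(event, context):
    import boto3  # bundled in the Lambda runtime
    lam = boto3.client("lambda")
    cw = boto3.client("cloudwatch")
    total, unreserved = extract_concurrency(lam.get_account_settings())
    cw.put_metric_data(
        Namespace="Custom/LambdaConcurrency",
        MetricData=[
            {"MetricName": "ConcurrentExecutionsLimit", "Value": total},
            {"MetricName": "UnreservedConcurrentExecutions", "Value": unreserved},
        ],
    )
```

With the custom metric in place, a standard CloudWatch alarm on `UnreservedConcurrentExecutions` dropping below a threshold completes the setup.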

Access running aws lambda function

I want to move a Celery task that generates Excel files to AWS Lambda, so I've been looking into AWS Lambda, but I have found nothing about getting a Lambda's "state".
In Celery, every task has a task_id; you can read a task's "state" via its task_id and also update the "state" from inside the task.
Can't AWS Lambda expose its state while a function is running?
AWS Lambda functions are stateless; they are purely functions as a service. If you want state, you might want to use Step Functions, which provide state as a service.
For more information about Step Functions, read here.
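As a concrete sketch of the Celery analogy: each Step Functions execution has an ARN you can poll for its status, much like querying a Celery task by its task_id. The status-to-state mapping below is my own illustration, not an official one:

```python
# Sketch: polling a Step Functions execution's status, Celery-style.
# The Celery-like state mapping is illustrative, not official.

def summarize_execution(desc):
    """Pure helper: map a describe_execution response to a Celery-like state."""
    status = desc["status"]  # RUNNING | SUCCEEDED | FAILED | TIMED_OUT | ABORTED
    mapping = {"RUNNING": "STARTED", "SUCCEEDED": "SUCCESS"}
    return mapping.get(status, "FAILURE")

def get_state(execution_arn):
    """Query one execution; the ARN plays the role of Celery's task_id."""
    import boto3  # available wherever the AWS SDK is installed
    desc = boto3.client("stepfunctions").describe_execution(
        executionArn=execution_arn)
    return summarize_execution(desc)
```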

Resources