Lambda Management getting timed out - aws-lambda

I'm trying to build an Alexa skill in Python, and for that I compressed all my files into a zip. But when I try to upload the zip file to Lambda I get this error:
timeout of 61000ms exceeded
And I'm not even able to save my files in the Lambda Management Console.
I tried deleting the function and recreating it, but that didn't help. I even tried making another account for the same purpose, but I'm still not getting anywhere.
Is there a bug in AWS Lambda management for the Python runtime, or is it my code? (But I'm not even able to execute my code.)

I faced a similar issue while doing this.
I had to increase the timeout to rectify it.
However, I would suggest storing the zip file in S3 and then providing the S3 URL in the Lambda Management Console.
So either set a higher timeout in the function's configuration, or upload the zip to S3 first and point Lambda at the S3 URL.
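If you prefer to do that from a script instead of the console, a minimal boto3 sketch looks like this (the bucket, key, and function name are placeholders for your own values):

import boto3

s3 = boto3.client("s3")
lam = boto3.client("lambda")

bucket = "my-deploy-bucket"        # placeholder: a bucket you own
key = "alexa-skill/skill.zip"      # placeholder object key
s3.upload_file("skill.zip", bucket, key)

# Point the existing function at the uploaded object instead of pushing
# the zip through the console upload that was timing out.
lam.update_function_code(
    FunctionName="my-alexa-skill",  # placeholder function name
    S3Bucket=bucket,
    S3Key=key,
)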

Related

AWS Chainlink Quickstart Error - S3 Error Access Denied

I am getting the following error when executing the Quickstart for Chainlink during the AuroraStack execution.
S3 Error: Access Denied
Not a very friendly error, so I went over to the YAML file:
https://aws-quickstart.s3.us-east-1.amazonaws.com/quickstart-chainlinklabs-chainlink-node/submodules/quickstart-amazon-aurora-postgresql/templates/aurora_postgres.template.yaml
I gave it a read, but I really don't see anything in it that even hits an S3 storage resource.
The error would lead me to believe that the parent YAML file that calls the one above can't even reach the S3 file for Aurora.
Anyone else seen / resolved this issue?
Any ideas are appreciated.
Thanks!
Chris
Ultimately, it was an S3 access issue to the Aurora file inside the Chainlink quickstart. I set up an S3 bucket, redid the files, gave myself permission, and it worked fine.
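A rough sketch of that workaround, re-hosting the Aurora template in a bucket you control (the destination bucket name is a placeholder; the source bucket and key are taken from the URL above):

import boto3

s3 = boto3.client("s3")

source_bucket = "aws-quickstart"
key = ("quickstart-chainlinklabs-chainlink-node/submodules/"
       "quickstart-amazon-aurora-postgresql/templates/aurora_postgres.template.yaml")
my_bucket = "my-quickstart-copy"   # placeholder: a bucket in your own account

# Download the template, then upload it to your own bucket so the parent
# stack references an object you are actually allowed to read.
s3.download_file(source_bucket, key, "aurora_postgres.template.yaml")
s3.upload_file("aurora_postgres.template.yaml", my_bucket, key)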

AWS Lambdas: SAM deployment ... identifying and removing old S3 package versions?

I'm relatively new to AWS Lambdas and SAM, and now that I've got things working I've got a seemingly simple question I can't find an answer to.
I've spent the last week getting a lambda app up and running using SAM (build, package, deploy numerous times until it works).
Problem
So now the S3 bucket I'm using to upload to has numerous (100 or so) previously uploaded (by sam package) versions of my zipped-up code.
Question
How can you identify which zipped up packages are the current ones (ie used by a current function and/or layer), and remove all the old obsolete ones?
Is there a way in SAM (command-line options or in the template files) to have it automatically delete old versions of your package when 'sam package' uploads a new version?
Is there somewhere in the AWS console to find the key of the zip file in your bucket that a current function or layer is using? (I tried everywhere to find that, but couldn't manage to; it's easy to get the ARNs, but not the actual URI in your bucket that they map to.)
Slight Complication
In the bucket I'm using to store the lambda packages, I've also got a custom layer.
So if it was just the app packages, I could easily (right now) just go in and delete everything in the bucket, then do a re-build/package/deploy to clean it up. ...but that would also delete my layer (and, same problem, I'm not sure which zip file in the bucket the layer is using).
But that approach wouldn't work long term anyway, as I'm planning to put together approx 10-15 different packages/functions, so deleting everything in the bucket when just one of them is updated is not going to work.
thanks for any thoughts, ideas and help!
1. In your packaged.yaml file (generated after invoking sam package) you can see, under each Lambda function, a CodeUri with a unique path s3://your-bucket/id. That id is the one used by the current function and/or layer and resides in your bucket. For a layer it's ContentUri.
2. Automatically deleting old versions of your package when 'sam package' uploads a new one: I'm not aware of anything like that.
3. Through the AWS console you can see your layer version, but I don't think there is any indication of a function's or layer's CodeUri/ContentUri.
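Since the console won't show it, a small script can pull those URIs out of packaged.yaml instead. A rough sketch using PyYAML (CloudFormation intrinsic tags such as !Sub are simply ignored; the file name is whatever sam package wrote):

import yaml

class CfnLoader(yaml.SafeLoader):
    """SafeLoader that swallows CloudFormation intrinsic tags (!Ref, !Sub, ...)."""

CfnLoader.add_multi_constructor("!", lambda loader, suffix, node: None)

with open("packaged.yaml") as f:
    template = yaml.load(f, Loader=CfnLoader)

for name, resource in template.get("Resources", {}).items():
    props = resource.get("Properties", {}) or {}
    uri = props.get("CodeUri") or props.get("ContentUri")
    if isinstance(uri, str) and uri.startswith("s3://"):
        print(f"{name}: {uri}")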
You can try to compare the currently deployed stack with what you've stored in S3. Let's assume you have a stack called test-stack; then you can retrieve the processed template from CloudFormation using the AWS CLI like this:
AWS_PAGER="" aws cloudformation get-template --stack-name test-stack \
--output json --template-stage Processed
To only get the processed template body, you may want to pipe the output again through
jq -r ".TemplateBody"
Now you have the processed CFN template that tells you which S3 buckets and keys it is using. Here is an example for a lambda function:
MyLambda:
  Type: 'AWS::Lambda::Function'
  Properties:
    Code:
      S3Bucket: my-bucket
      S3Key: 0c53a7ccb1c1762eaeebd96555d13a20
You can then try to delete s3 objects that are not referenced by the current stack.
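To rough out that comparison in code, something like the following boto3 sketch lists bucket objects whose keys the processed template never mentions (stack and bucket names are placeholders; review the output before deleting anything):

import json
import boto3

cfn = boto3.client("cloudformation")
s3 = boto3.client("s3")

stack_name = "test-stack"
bucket = "my-sam-artifact-bucket"   # placeholder: your sam package bucket

body = cfn.get_template(StackName=stack_name, TemplateStage="Processed")["TemplateBody"]
# TemplateBody comes back as a dict for JSON templates and as a string for YAML ones.
if not isinstance(body, str):
    body = json.dumps(body)

paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket=bucket):
    for obj in page.get("Contents", []):
        if obj["Key"] not in body:
            print("not referenced by", stack_name + ":", obj["Key"])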
There used to be a GitHub ticket requesting some sort of automatic cleanup mechanism, but it was closed as out of scope: https://github.com/aws/serverless-application-model/issues/557#issuecomment-417867028
It may be worth noting that you could also try to set up an S3 lifecycle rule to automatically clean up old S3 objects, as suggested here: https://github.com/aws/aws-sam-cli/issues/648. However, I don't think that this will always be a suitable solution.
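For completeness, such a lifecycle rule could look roughly like this with boto3 (the bucket name and the 30-day window are assumptions, and the rollback caveat below still applies):

import boto3

s3 = boto3.client("s3")
s3.put_bucket_lifecycle_configuration(
    Bucket="my-sam-artifact-bucket",   # placeholder bucket
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "expire-old-sam-packages",
                "Status": "Enabled",
                "Filter": {"Prefix": ""},    # apply to the whole bucket
                "Expiration": {"Days": 30},  # assumed retention window
            }
        ]
    },
)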
Last but not least, there has been an attempt to include some automatic cleaning approach in the sam documentation, but it was dismissed as:
[...] there are certain use cases that require these packaged S3 objects to persist, and deleting them would cause significant problems. One such example is the "CloudFormation stack deployment rollback" scenario: 1) Deploy version N of a stack, 2) Delete the packaged S3 object that version N uses, 3) Deploy version N+1 with a "bad" template file that triggers a CloudFormation rollback.
https://github.com/awsdocs/aws-sam-developer-guide/pull/3#issuecomment-462993286
So while it is possible to identify obsolete S3 packaged versions, it might not always be a good idea to delete them after all...
Actually, CloudFormation (which SAM is based on) uses S3 as temporary storage only. When you create or update the Lambda function, a copy of the code is made, so you could delete all objects from the bucket and the Lambda function would still work correctly.
Caveat: there are cases where the S3 object may be required, for example to roll back a CloudFormation stack, as in the "CloudFormation stack deployment rollback" scenario (reference):
Deploy version N of a stack
Delete the packaged S3 object that version N uses
Deploy version N+1 with a "bad" template file that triggers a CloudFormation rollback

How to decrease the size of serverless deploy?

I'm deploying an aws lambda function using serverless framework. My problem is there is a large file (44MB) that is deployed every time I do sls deploy -f any_fn. I've had similar problems when there is a node_modules folder (which can be quite big).
Is there a way to reduce the upload size by uploading the common files only once (and for all functions)? Because right now it keeps zipping and deploying that same binary file again and again though it never changes.
There's no way to do what you propose. AWS Lambda requires you to upload the entire package including all dependencies each time. Everything has to be inside the zip file that is deployed to Lambda.
This was my solution:
Upload the large files to an S3 bucket.
Download those S3 files in code that runs at global scope, not inside the scope of exports.handler, so it executes only once per container (see the sketch below).
To make sure the container is reused, keep the Lambda warm using a CloudWatch timer, which takes two simple steps.
This allows you to deploy only the small files.
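Sketched in Python (the same idea as doing it outside exports.handler in Node): the download runs at import time, once per container, and warm invocations reuse the local copy. Bucket, key, and file names are placeholders.

import boto3

BUCKET = "my-assets-bucket"          # placeholder bucket
KEY = "assets/large-file.bin"        # placeholder key for the large file
LOCAL_PATH = "/tmp/large-file.bin"   # /tmp is Lambda's writable scratch space

# Module scope: executed once per container, on cold start only.
boto3.client("s3").download_file(BUCKET, KEY, LOCAL_PATH)

def handler(event, context):
    # Warm invocations skip the download and just read the cached copy.
    with open(LOCAL_PATH, "rb") as f:
        data = f.read()
    return {"size": len(data)}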
You can try using Lambda layers. All you need to do is create a separate serverless project for dependency management (e.g. node_modules), and the rest of the services will refer to it (follow the docs). This should reduce the deployment/package size of each individual Lambda significantly.
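If you end up managing shared dependencies outside the framework, a layer version is ultimately just another zip in S3; purely as an illustration with boto3 (layer name, bucket, key, and runtime are placeholders):

import boto3

lam = boto3.client("lambda")
lam.publish_layer_version(
    LayerName="shared-node-modules",          # placeholder layer name
    Content={
        "S3Bucket": "my-deploy-bucket",       # placeholder bucket
        "S3Key": "layers/node_modules.zip",   # placeholder key
    },
    CompatibleRuntimes=["nodejs18.x"],        # assumed runtime
)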
Use Lambda containers and your problems will be solved! Lambda container images have a 10 GB size limit, so you can add anything you want in there. I've made many Express apps with serverless-http and Lambda containers.
You can also add an EFS to your Lambda and access your files from there.
Check this tutorial

How can I upload image files to an AWS EC2 instance?

My web application server is on an AWS EC2 instance, and I'm using the MEAN stack.
I'd like to upload images to the EC2 instance (e.g. /usr/local/web/images).
I can't find out how to get the credentials; everything I find is about AWS S3.
How can I upload an image file to the EC2 instance?
If you do file transfers repeatedly, try unison. It is bidirectional, a kind of sync, and it has options to handle conflicts.
I've found the easiest way to do this as a one-off is to upload the file to Google Drive and then download it from there. View this thread to see how simply this can be done!
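If the goal is simply to get files onto the instance's filesystem, no AWS credentials are involved at all, only the instance's SSH key pair. A minimal paramiko sketch (host name, user, and key path are placeholders; the remote directory is the one from the question):

import paramiko

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect(
    "ec2-203-0-113-10.compute-1.amazonaws.com",  # placeholder public DNS
    username="ubuntu",                           # depends on the AMI
    key_filename="/path/to/my-key.pem",          # the instance's key pair
)

sftp = client.open_sftp()
sftp.put("photo.jpg", "/usr/local/web/images/photo.jpg")
sftp.close()
client.close()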

Local file blob save

I am a bit stuck with this Windows Azure Blob storage.
I have a controller that receives a (local) file path.
So on the web page I do something like this:
http:...?filepath=C:/temp/myfile.txt
On the web service I want to get this file and put it into the blob service. When I run it locally there is no problem, but when I publish it there is no way to get the file. I always get:
Error encountered: Could not find a part of the path 'C:/temp/myfile.txt'.
Can someone help me? Is there a solution?
First, I would say that to get proper help you need to provide a better description of your problem. What do you mean by "on the web service"? Is it a WCF web role? That seems to match your partial problem description. However, most web services use http://whatever.cloudapp.net/whatever.svc, or http://whatever.cloudapp.net/whaterever.aspx?whatever if added. Have you done something like that in your application?
You have also mentioned a controller in your code, which makes me think it is an MVC-based web role application.
I am writing the above to help you formulate your question better next time.
Finally, based on what you have provided, you are reading a file from the local file system (C:\temp\myfile.txt) and uploading it to an Azure blob. This will work in the compute emulator but is sure to fail in Windows Azure, because:
In your web role code you will not have access to write to the C:\ drive, and that's why the file is not there and you get the error. Your best bet is to use Azure Local Storage: write the content there, read the file back from Local Storage, and then upload it to the Azure blob. Azure Local Storage is designed for writing content from a web role (you will have write permission).
I am also concerned about your application design, because Azure VMs are not persisted, so a solution that writes anywhere in the VM is not good; you may want to write directly to Azure storage without going through the local file system, if that is possible.
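For what it's worth, the blob-upload step itself is small; shown here with the current Python SDK purely as an illustration (the question's project is .NET, and the connection string, container, and file names are placeholders). The point is that the service uploads a file it wrote to its own local storage, not a path supplied by the browser.

from azure.storage.blob import BlobServiceClient

service = BlobServiceClient.from_connection_string("<storage-connection-string>")
blob = service.get_blob_client(container="uploads", blob="myfile.txt")

# Upload a file the web app itself wrote to its local scratch storage,
# rather than a client-supplied local path like C:/temp/myfile.txt.
with open("local-scratch/myfile.txt", "rb") as data:
    blob.upload_blob(data, overwrite=True)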
Did you verify the file exists on the Azure server?
