All,
I'm really stuck and have tried almost everything. Can someone please help?
I provision 2 instances while creating my Auto Scaling group. I trigger a Lambda (which manipulates the tags) to change each instance's name to a unique name.
Desired State
I want the first Lambda invocation to give the first instance the name "web-1".
Then the second Lambda invocation would assign the name "web-2".
Current State
I start with a search on running instances to see if "web-1" exists or not.
So in this case my Lambda executes twice and gives both instances the same name (web-1, web-1).
How do I get around this? I know the problem is due to Lambda listening to CloudWatch events: in my case the ASG launch creates 2 events at the same time, which leads to the problem I have.
Thanks.
You are running into a classic multi-threading issue. Both Lambda executions run simultaneously, see the same "unused" web-1, and both assign that same name.
What you need is an atomic operation that gives each Lambda execution "permission" to proceed. You can try using a helper DynamoDB table to serialize the tag attempts.
1. Have your Lambda function decide which tag to set (web-1, web-2, etc.).
2. Check a DynamoDB table to see if that tag has been set in the last 30 seconds. If so, someone else got to it first, so go back to step 1.
3. Try to write your "ownership" of the sought-after tag to DynamoDB along with your current timestamp. Use attribute_not_exists or another DynamoDB condition to ensure only one simultaneous write succeeds.
4. If the write fails, go back to step 1.
5. If the write succeeds, you're free to set your tag.
The reason for the timestamps is to allow "web-1" to be terminated and a new EC2 instance to later be launched and labelled "web-1".
The above logic is not proven to work, but hopefully it gives enough guidance to develop a working solution.
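Here's a rough boto3 sketch of steps 2-5, also not proven. It assumes a helper table called instance-names with a string hash key "name" (the table and attribute names are placeholders):

import time
import boto3

dynamodb = boto3.client('dynamodb')

def claim_name(name, table='instance-names', ttl_seconds=30):
    # Try to claim `name` (e.g. "web-1"). Only the one caller whose
    # conditional write succeeds gets True; everyone else gets False.
    now = int(time.time())
    try:
        dynamodb.put_item(
            TableName=table,
            Item={'name': {'S': name}, 'claimed_at': {'N': str(now)}},
            # Succeed only if the name is unclaimed or the old claim is stale
            ConditionExpression='attribute_not_exists(#n) OR claimed_at < :stale',
            ExpressionAttributeNames={'#n': 'name'},
            ExpressionAttributeValues={':stale': {'N': str(now - ttl_seconds)}},
        )
        return True
    except dynamodb.exceptions.ConditionalCheckFailedException:
        return False

Your tagging Lambda would then loop: try claim_name('web-1'), then 'web-2', and so on, and tag its instance with the first name it manages to claim.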
Related
I have a Lambda function bound to CodeBuild notifications; each Lambda invocation writes details of the notification that triggered it to a DynamoDB table (BillingMode PAY_PER_REQUEST).
Each CodeBuild notification spawns an independent Lambda instance. A CodeBuild build can spawn 7-8 separate notifications/Lambda instances, many of which often happen simultaneously.
The Lambda function uses DynamoDB:PutItem to put details of the notification to DynamoDB. What I find is that out of 7-8 notifications in a 30 second period, sometimes all 7-8 get written to DynamoDB, but sometimes it can be as low as 0-1; many calls to DynamoDB:PutItem simply seem to be "ignored".
Why is this happening?
My guess is that DynamoDB simply shouldn't be accessed by multiple Lambda instances in this way; that best practice is to push the updates to a SQS queue bound to a separate Lambda, and have that separate Lambda write many updates to DynamoDB as part of a transaction.
Is that right? Why might parallel independent calls to DynamoDB:PutItem fail silently?
TIA.
DynamoDB is accessed through a web endpoint and can handle a very large number of concurrent connections, so the issue is not how many Lambdas are writing.
I typically see this happen when users do not let the Lambda wait until the API requests are complete, so the container gets shut down prematurely. I would first check your code and ensure your Lambda stays alive until all items have been processed; you can verify this by adding some simple logging to your code.
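If it helps, here's a minimal Python sketch of that kind of logging (your function may be in another runtime; the table name and key are placeholders). In boto3, put_item returns only after DynamoDB acknowledges the write, so a "done" log line per notification confirms the call actually completed before the container was frozen:

import json
import logging
import boto3

logger = logging.getLogger()
logger.setLevel(logging.INFO)

table = boto3.resource('dynamodb').Table('codebuild-notifications')  # placeholder table

def handler(event, context):
    # One CloudWatch event per invocation; fall back to the request id as the key
    item = {
        'id': event.get('id', context.aws_request_id),
        'detail': json.dumps(event),
    }
    logger.info('PutItem start: %s', item['id'])
    table.put_item(Item=item)  # synchronous: returns after DynamoDB acknowledges
    logger.info('PutItem done: %s', item['id'])
    return {'written': item['id']}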
What you are describing is a good use case for Step Functions.
As much as Lambda functions are great glue between services, they have their overheads and limitations. With Step Functions, you can call DynamoDB:PutItem directly, and you can handle various scenarios and flows, such as async calls. These flows can be implemented in a Lambda function, but with less visibility and less traceability.
BTW, you can also call a Lambda function from Step Functions; however, I recommend trying the direct service call to maximize the benefits of the Step Functions service.
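For reference, a direct DynamoDB putItem task in a state machine looks roughly like this; a sketch only, with the table name, item shape and role ARN as placeholders:

import json
import boto3

sfn = boto3.client('stepfunctions')

# Minimal state machine that writes an item straight to DynamoDB, no Lambda in between
definition = {
    "StartAt": "SaveNotification",
    "States": {
        "SaveNotification": {
            "Type": "Task",
            "Resource": "arn:aws:states:::dynamodb:putItem",
            "Parameters": {
                "TableName": "codebuild-notifications",
                "Item": {
                    "id": {"S.$": "$.id"},
                    "status": {"S.$": "$.status"}
                }
            },
            "End": True
        }
    }
}

sfn.create_state_machine(
    name='save-codebuild-notification',
    definition=json.dumps(definition),
    roleArn='arn:aws:iam::123456789012:role/placeholder-stepfunctions-role',
)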
My mistake: I had a separate issue which was messing up some of the range keys and causing updates to "fail" silently. But thanks for the tip regarding timeouts.
I know there's a question with the same title, but my question is a little different: I have a Lambda API, saveInputAPI(), to save a value into a specified field. Users can invoke this API with different parameters, for example:
saveInput({"adressType",1}); //adressType is a DB field.
or
saveInput({"name","test"}) //name is a DB field.
And of course this is hosted on AWS, so I'm also using API Gateway as well. But the problem is that sometimes something like this happens:
As you can see, API call No. 19 was invoked first but ended up finishing later
(10:10:16:828) -> (10:10:18:060)
While API call No. 18 was invoked later but finished sooner...
(10:10:17:611) -> (10:10:17:861)
This leads to a lot of problems in my project, and sometimes the delay between 2 API calls is up to 10 seconds. The front end acts independently, so users don't know what happens behind the scenes: they think they have set addressType to 1, but in reality addressType is still 2. This project is large and I cannot change this design of using only 1 API to update DB values. Is there any way for me to fix this problem? Really appreciate any ideas. Thanks
If database updates can't simply be skipped when the last-updated timestamp is more recent than the source event's timestamp, you need to decouple API Gateway and Lambda:
API Gateway writes to an SQS FIFO queue.
A Lambda consumes the SQS queue and processes the requests (a consumer sketch follows below).
This ensures the older event is processed first.
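A rough sketch of the consuming Lambda, assuming API Gateway sends messages shaped like {"userId": ..., "field": ..., "value": ...} with MessageGroupId set to the user ID so per-user ordering is preserved (all names here are placeholders):

import json
import boto3

table = boto3.resource('dynamodb').Table('user-settings')  # placeholder table

def handler(event, context):
    # Records from a FIFO queue arrive in order within a message group,
    # so processing them sequentially preserves the caller's intent.
    for record in event['Records']:
        body = json.loads(record['body'])  # e.g. {"userId": "u1", "field": "addressType", "value": 1}
        table.update_item(
            Key={'userId': body['userId']},
            UpdateExpression='SET #f = :v',
            ExpressionAttributeNames={'#f': body['field']},
            ExpressionAttributeValues={':v': body['value']},
        )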
AWS Lambda is asynchronous by design. That means trying to make it synchronous and predictable is largely a waste of effort.
If your concern is preventing "old" data (in a scheduling sense) from overwriting "fresh" data, then you might consider timestamping each record and applying a constraint like "to overwrite the target data, the source timestamp has to be newer than the timestamp of the targeted data".
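With DynamoDB that constraint can be expressed as a conditional update. A sketch, assuming a user-settings table keyed on userId and a per-field timestamp attribute (all names are placeholders):

import boto3

table = boto3.resource('dynamodb').Table('user-settings')  # placeholder table

def save_if_newer(user_id, field, value, event_ts):
    # Overwrite the field only if this event is newer than the stored timestamp.
    try:
        table.update_item(
            Key={'userId': user_id},
            UpdateExpression='SET #f = :v, #ts = :ts',
            ConditionExpression='attribute_not_exists(#ts) OR #ts < :ts',
            ExpressionAttributeNames={'#f': field, '#ts': field + '_updated_at'},
            ExpressionAttributeValues={':v': value, ':ts': event_ts},
        )
    except table.meta.client.exceptions.ConditionalCheckFailedException:
        pass  # stale event: keep the fresher value already stored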
I've built a bit of a pipeline of AWS Lambda functions using the Serverless framework. There are currently five steps/functions, and I need them to run in order and each run exactly once. Roughly, the functions are:
Trigger function by an HTTP request, respond with an ID.
Access an API to get the URL of a resource to download.
Download that resource and upload a copy to S3.
Alter that resource and upload the altered copy to S3.
Submit the altered resource to a different API.
The specifics aren't important, but the question is: What's the best event/trigger to use to move along down this line of functions? The first one is triggered by an HTTP call, but the first one needs to trigger the second somehow, then the second triggers the third, and so on.
I wrote all the code using AWS SNS, but now that I've deployed it to staging I see that SNS often triggers more than once. I could add a bunch of code to detect this, but I'd rather not. And the problem compounds: if the second function gets triggered twice, it sends two SNS notifications to trigger step three. If either of those notifications gets doubled... it's not unreasonable that the last function could be called ten times instead of once.
So what's my best option here? Trigger the chain through HTTP? Kinesis maybe? I have never worked with a trigger other than HTTP or SNS, so I'm not really sure what my options are, and which options are guaranteed to only trigger the function once.
AWS Step Functions seems pretty well targeted at this use-case of tying together separate AWS operations into a coherent workflow with well-defined error handling.
Not sure if the pricing will work for you (can be pricey for millions+ operations) but it may be worth looking at.
Also not sure about performance overhead or other limitations, so YMMV.
You can simply trigger the next Lambda asynchronously from your Lambda function after you complete the required processing in that step.
So the first Lambda is triggered by an HTTP call, and in that execution, after you finish processing the step, you launch the next Lambda function asynchronously instead of sending the trigger through SNS or Kinesis. Repeat this in each of your steps, so each step is invoked exactly once by the previous one.
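A sketch of what that looks like with boto3, assuming the next function in the chain is called pipeline-step-2 (the name is a placeholder):

import json
import boto3

lambda_client = boto3.client('lambda')

def do_this_step(event):
    # placeholder for this step's real processing
    return {'id': event.get('id'), 'status': 'step-1-done'}

def handler(event, context):
    result = do_this_step(event)

    # InvocationType='Event' queues the next function asynchronously
    # and returns immediately, so this step doesn't wait for it.
    lambda_client.invoke(
        FunctionName='pipeline-step-2',
        InvocationType='Event',
        Payload=json.dumps(result).encode('utf-8'),
    )
    return result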
Eventful Lambda triggers (SNS, S3, CloudWatch, ...) generally guarantee at-least-once invocation, not exactly-once. As you noted, you'd have to handle deduplication manually, for example by keeping track of event IDs in DynamoDB (using strongly consistent reads!) or by implementing idempotent Lambdas, meaning functions that have no additional effects even when invoked several times with the same input. In your example, step 4 is essentially idempotent provided that the function doesn't have any side effects apart from storing the altered copy, and that the new copy overwrites any previously stored copies with the same event ID.
One service that does guarantee exactly-once delivery out of the box is SQS FIFO. This service unfortunately cannot be used to trigger Lambdas directly so you'd have to set up a scheduled Lambda to poll the FIFO queue periodically (as per this answer). In your case you could handle step 5 with this arrangement, since I'm assuming you don't want to submit the same resource to the target API several times.
So in summary here's how I'd go about it:
Lambda A, invoked via HTTP, responds with ID and proceeds to asynchronously fetch resource from the API and store it to S3
Lambda B, invoked by S3 upload event, downloads the uploaded resource, alters it, stores the altered copy to S3 and finally pushes a message into the FIFO SQS queue using the altered resource's filename as the distinct deduplication ID
Lambda C, invoked by CloudWatch scheduler, polls the FIFO SQS queue and upon a new message fetches the specified altered resource from S3 and submits it to the other API
With this arrangement even if Lambda B is occasionally executed twice or more by the same S3 upload event there's no harm done since the FIFO SQS queue handles deduplication for you before the flow reaches Lambda C.
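A sketch of the hand-off from Lambda B, assuming a FIFO queue named altered-resources.fifo (the URL and names are placeholders). Even if B runs twice for the same upload, the second send is dropped inside SQS's 5-minute deduplication window:

import boto3

sqs = boto3.client('sqs')
QUEUE_URL = 'https://sqs.us-east-1.amazonaws.com/123456789012/altered-resources.fifo'  # placeholder

def notify_next_step(altered_key):
    # Called by Lambda B after the altered copy is stored in S3.
    sqs.send_message(
        QueueUrl=QUEUE_URL,
        MessageBody=altered_key,
        MessageGroupId='altered-resources',
        MessageDeduplicationId=altered_key,  # duplicates of the same key are discarded
    )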
AWS Step Functions is meant for you: https://docs.aws.amazon.com/step-functions/latest/dg/welcome.html
You execute the steps you want based on the outputs of previous executions.
Each task/step just needs to output its JSON correctly into the desired "state".
https://docs.aws.amazon.com/step-functions/latest/dg/concepts-states.html
Based on the state, your workflow moves on. You can build your workflow easily and trigger Lambdas or ECS tasks.
ECS tasks are your own "Lambda" environment, running without the constraints of the AWS Lambda environment.
With ECS tasks you can run on bare metal, on your own EC2 machines, or in Docker containers on ECS, and thus have extensible resource limits.
Compare that to Lambda, where the limits are pretty strict: 512 MB of disk, limited execution time, etc.
I have an application where an initial lambda will spawn several asynchronous lambdas, and some of those lambdas may spawn their own asynchronous lambdas, and so on (although with a generation-counter to prevent runaways). The lambdas all currently write to a DynamoDB table. I'd like to know when the last one finishes, so as to kick off some different processes.
I can think of several ways generally:
writing specific fields to the DB, and each lambda checks if it's the last one running
SQS
SWF (Step Functions/state machines)
I'd like to know the simplest way to do this, and/or the "best" or canonical way, if there is one. Would also like to avoid SQS (although I'm going to experiment with SWF anyway, just because it sounds cool).
This is a good use case for AWS Step Functions.
The Parallel state is exactly what you need.
The Parallel state ("Type": "Parallel") can be used to create parallel branches of execution in your state machine.
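A minimal sketch of such a definition, written here as a Python dict you would json.dumps into the state machine (all ARNs are placeholders). The Parallel state only moves on to the next state once every branch has finished, which is exactly the "last one done" signal you're after:

definition = {
    "StartAt": "FanOut",
    "States": {
        "FanOut": {
            "Type": "Parallel",
            "Branches": [
                {"StartAt": "WorkerA",
                 "States": {"WorkerA": {"Type": "Task",
                                        "Resource": "arn:aws:lambda:us-east-1:123456789012:function:worker-a",
                                        "End": True}}},
                {"StartAt": "WorkerB",
                 "States": {"WorkerB": {"Type": "Task",
                                        "Resource": "arn:aws:lambda:us-east-1:123456789012:function:worker-b",
                                        "End": True}}}
            ],
            "Next": "KickOffFinalProcess"
        },
        "KickOffFinalProcess": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:123456789012:function:final-process",
            "End": True
        }
    }
}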
Your scenario best fits AWS Step Functions, where you can define a Parallel state for the Lambda steps and, once the last branch finishes, trigger the final step that kicks off the different process. This simplifies the state machine but incurs additional cost for the individual state transitions.
Another approach is to use DynamoDB atomic counters to keep track of each execution, so that after the last execution a Lambda function attached to the table's stream can identify the completion and kick off the different process.
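A sketch of the atomic counter idea (table and attribute names are placeholders; the completion check is shown inline here, but it could just as well live in a Lambda on the table's stream):

import boto3

table = boto3.resource('dynamodb').Table('job-tracker')  # placeholder table

def before_spawning_child(job_id):
    # Increment the outstanding-work counter BEFORE invoking the child,
    # so the counter can never hit zero while work is still being spawned.
    table.update_item(
        Key={'jobId': job_id},
        UpdateExpression='ADD pending :one',
        ExpressionAttributeValues={':one': 1},
    )

def on_lambda_finished(job_id):
    # Every Lambda calls this when done; the one that drives the counter
    # to zero knows it was the last and kicks off the different process.
    resp = table.update_item(
        Key={'jobId': job_id},
        UpdateExpression='ADD pending :minus_one',
        ExpressionAttributeValues={':minus_one': -1},
        ReturnValues='UPDATED_NEW',
    )
    if resp['Attributes']['pending'] == 0:
        kick_off_different_process(job_id)

def kick_off_different_process(job_id):
    print('all lambdas for %s finished' % job_id)  # placeholder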
In Amazon EC2, the Instances page shows details of a machine such as its IP, size, key pair, security group, how long it has run, etc.
Once the instance is terminated, the line item stays visible for about an hour. Within this period, we can still see the details of the machine as it was while running, but once the line item gets removed there is no way to know them.
Say some instances are manually launched, used for some time and then terminated. An hour after that event, there is no way to find out what happened.
There is a detailed-billing feature, but it only provides the instance IDs and size. I am interested in the key pair, IP, OS, security group and machine name, if any. Is there any way to find these out?
Edit
I understand that I can have a cron job periodically list all instances (and their details) and store them in a database. The thing is, to host that cron process I would need a machine running 24x7. What I need is some sort of hook, a callback, an event.
Even if not readily available, can such a solution be built?
Once the instance has been terminated, as you mentioned, most of the information will still be available through the API before it completely disappears after an hour or so. (The IP address and DNS name will not be available, since every time you stop or terminate an instance the IP address is relinquished.) Once the instance completely disappears, it's gone for good.
The workaround is to query the instances API every so often and save the state and instance information. You can save it in memory, in a database or in plain text files, depending on what you are trying to do or what application you are trying to create.
Here's an example of saving the instance information into a Python dictionary in memory, using the boto Python interface to the API:

import boto.ec2

# Connect to the region the instances run in (region name here is just an example)
conn = boto.ec2.connect_to_region('us-east-1')

instance_dict = {}
reservations = conn.get_all_instances()
for res in reservations:
    instance = res.instances[0]
    if instance.id == 'i-xxxxxx':
        instance_dict[instance.id] = instance
The dictionary instance_dict will always have the IP address, DNS and other instance info for the duration of your program as long as you don't overwrite it. To terminate the instance you can run something like:
instance_dict['i-xxxxxx'].terminate()
but later you can still use:
instance_dict['i-xxxxxx'].ip_address
and:
instance_dict['i-xxxxxx'].dns_name
Hope this helps.