How to make an EC2 instance call another instance? - amazon-ec2

I have two EC2 instances. When one finishes a job, I want it to signal the other one to do some other work.
So, how do I set up that communication? I don't want to use cURL, since it seems expensive. I think AWS should have a simple solution for this, but I still can't find relevant help in the documentation.
:(
Also, how can I send data between two instances quickly without going through SSH? I know SSH can be used, but it seems slow. Once again, is there any tool that EC2 provides to do that?
Actually, I need two methods:
1) Instance A tells Instance B to grab the data from Instance A.
This is answered by Adrian: I can use SQS. I will try that.
2) Once Instance B gets the signal, the data (on EBS) in Instance A needs to be transferred to Instance B. The amount of data can be big even after I zip it, around 50 MB. And I need Instance B to get the data fast so that it has enough time to process the data before the next interval comes in.
So, I am thinking of either these methods:
a) Instance A dumps the data from the DB and uploads it to S3, then signals Instance B. Instance B gets the data from S3.
b) Instance A dumps the data from the DB, then signals Instance B. Instance B establishes an SSH (or any other) connection to Instance A and grabs the data.
The data may need to be stored permanently, but that is not a concern at this moment. It is mainly for Instance B to process.
This is a simple scenario. I'm also thinking about what the proper approach is if I scale it out to multiple instances. :)
Thanks.

Amazon has a special service for this -- it's called SQS, and it allows instances to send messages to each other through special queues. There are SDKs for SQS in various languages, like Java and PHP. This should serve your signaling needs.
For actually sending the bulky data over, it's best to use S3 (and send the object key in the SQS message). You're right that you're introducing latency by adding an extra middle-man, but you'll find that S3 is very fast from EC2 instances (if you keep them in the same region, that is), and, more important than performance, S3 is very reliable. If you try to manage the transfer yourself through SSH, you'll have to work out a lot of error checking and retry logic that S3 handles for you. You can use S3FS to easily write and read to/from S3 from EC2.
Edited to address your updated question.
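As a minimal sketch of the S3 + SQS pattern described above, using boto3 (the Python SDK, one of the SDKs mentioned); the bucket name and queue URL are made up for illustration:

    # Hedged sketch: Instance A uploads the dump and signals Instance B with the object key.
    import boto3

    s3 = boto3.client("s3")
    sqs = boto3.client("sqs")

    BUCKET = "my-transfer-bucket"  # hypothetical bucket
    QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/work-queue"  # hypothetical queue

    def signal_instance_b(dump_path: str, key: str) -> None:
        # Instance A: upload the dump, then send the object key as the message body.
        s3.upload_file(dump_path, BUCKET, key)
        sqs.send_message(QueueUrl=QUEUE_URL, MessageBody=key)

    def poll_and_fetch() -> None:
        # Instance B: long-poll the queue, download the referenced object, then delete the message.
        resp = sqs.receive_message(QueueUrl=QUEUE_URL, MaxNumberOfMessages=1, WaitTimeSeconds=20)
        for msg in resp.get("Messages", []):
            key = msg["Body"]
            s3.download_file(BUCKET, key, f"/tmp/{key}")
            sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=msg["ReceiptHandle"])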

You may want to look at SNS... which is kind of like push SQS.

How fast do you need this communication to be? SSH is pretty darn fast. The only thing that I can think of that might be faster is raw sockets (from within whatever program is running the jobs).

You could use a distributed workflow managing service.
If Instance B has already completed the task, it can go on to pick another task. Usually, you would want Instance B to signal that it has "picked up" a task and is doing it. Then other instances should try to pick up other tasks on your list. You need a central service which knows which task has been picked up already, and which ones are left for grabs.
When Instance B completes the task successfully, it should signal the central service that it is free for a new task, and pick one up if there is something left.
If it fails to complete the task, the central service should be able to detect it (via heartbeats and timeouts you defined) and put the task back on the list so that some other instance can pick it up.
Amazon SWF is the central service which will provide you with all of this.
For data required by each instance, you should put it in a central store like S3, and configure the S3 paths such that each task knows where to download its data from, without having to sync up.
e.g. data for task 1 could be placed in something like s3://my-bucket/task1
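A minimal sketch of that per-task prefix convention, assuming boto3 and a hypothetical bucket named my-bucket:

    # Hedged sketch: each worker downloads everything under its task's S3 prefix.
    import boto3

    s3 = boto3.client("s3")

    def download_task_data(task_id: str, dest_dir: str = "/tmp") -> None:
        """Download every object stored under the task's S3 prefix (e.g. s3://my-bucket/task1/)."""
        prefix = f"{task_id}/"
        paginator = s3.get_paginator("list_objects_v2")
        for page in paginator.paginate(Bucket="my-bucket", Prefix=prefix):
            for obj in page.get("Contents", []):
                filename = obj["Key"].split("/")[-1]
                s3.download_file("my-bucket", obj["Key"], f"{dest_dir}/{filename}")

    # e.g. the worker that picked up task 1:
    # download_task_data("task1")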

Related

Best method to persist data from an AWS Lambda invocation?

I use AWS Simple Email Services (SES) for email. I've configured SES to save incoming email to an S3 bucket, which triggers an AWS Lambda function. This function reads the new object and forwards the object contents to an alternate email address.
I'd like to log some basic info from my AWS Lambda function during invocation -- who the email is from, to whom it was sent, whether it contained any links, etc.
Ideally I'd save this info to a database, but since AWS Lambda functions are costly (relative to other AWS operations), I'd like to do this as efficiently as possible.
I was thinking I could issue an HTTPS GET request to a private endpoint with a query string containing the info I want logged. Since I could fire my request asynchronously at the outset and continue processing, I thought this might be a cheap and efficient approach.
Is this a good method? Are there any alternatives?
My Lambda function fires irregularly, so despite Lambda functions being kept alive for 10 minutes or so after firing, it seems a database connection is likely to be slow and costly, since AWS charges per 100 ms of usage.
Since I could conceivably get thousands of emails per month, ensuring my Lambda function is efficient is paramount for cost. I maintain hundreds of domain names, so my numbers aren't exaggerated. Thanks in advance.
I do not think that thousands of emails per month should be a problem; these cloud services have been developed with scalability in mind and can go way beyond the numbers you are suggesting.
In terms of persisting, I cannot really see (without logs or metrics) why your DB connection would be slow. From the moment you use AWS, traffic goes over its own internal infrastructure, so speeds will be high and not something you should be worrying about.
I am not an expert on billing, but from what you are describing, it seems like Lambda + S3 + DynamoDB is well suited to your use case.
From the type of data you are describing (email data), it doesn't seem that you would hit either a memory issue (Lambdas have memory constraints which can be a pain) or an I/O bottleneck. If you can share more details on the memory used during invocation and the time taken, that would be great, as well as how much data you store on each Lambda invocation.
I think you could store JSON-serialized strings of your email data in DynamoDB easily; it should be pretty seamless and not that costly.
I have not used SES, but you could put a trigger on DynamoDB whenever you store a record, in case you want to follow up with another Lambda.
You could also combine S3 + DynamoDB: when you store a record, simply upload a file containing the record to a new S3 key and update the row in DynamoDB with a pointer to the new S3 object.
DynamoDB + S3
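As a rough sketch of the DynamoDB approach above, assuming boto3 and a hypothetical email_log table keyed on message_id:

    # Hedged sketch: one small DynamoDB write per Lambda invocation.
    import boto3

    table = boto3.resource("dynamodb").Table("email_log")  # hypothetical table

    def log_email(message_id: str, sender: str, recipient: str, has_links: bool) -> None:
        # Item attributes map directly from a Python dict, so JSON-like email
        # metadata can be stored without any extra serialization work.
        table.put_item(Item={
            "message_id": message_id,
            "from": sender,
            "to": recipient,
            "has_links": has_links,
        })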
You can now persist data using AWS EFS.

How can I trigger one AWS Lambda function from another, guaranteeing the second only runs once?

I've built a bit of a pipeline of AWS Lambda functions using the Serverless framework. There are currently five steps/functions, and I need them to run in order and each run exactly once. Roughly, the functions are:
Trigger function by an HTTP request, respond with an ID.
Access an API to get the URL of a resource to download.
Download that resource and upload a copy to S3.
Alter that resource and upload the altered copy to S3.
Submit the altered resource to a different API.
The specifics aren't important, but the question is: What's the best event/trigger to use to move along down this line of functions? The first one is triggered by an HTTP call, but the first one needs to trigger the second somehow, then the second triggers the third, and so on.
I wrote all the code using AWS SNS, but now that I've deployed it to staging I see that SNS often triggers more than once. I could add a bunch of code to detect this, but I'd rather not. And the problem is also compounding -- if the second function gets triggered twice, it sends two SNS notifications to trigger step three. If either of those notifications gets doubled... it's not unreasonable that the last function could be called ten times instead of once.
So what's my best option here? Trigger the chain through HTTP? Kinesis maybe? I have never worked with a trigger other than HTTP or SNS, so I'm not really sure what my options are, and which options are guaranteed to only trigger the function once.
AWS Step Functions seems pretty well targeted at this use-case of tying together separate AWS operations into a coherent workflow with well-defined error handling.
Not sure if the pricing will work for you (can be pricey for millions+ operations) but it may be worth looking at.
Also not sure about performance overhead or other limitations, so YMMV.
You can simply trigger the next lambda asynchronously in your lambda function after you complete the required processing in that step.
So, the first Lambda is triggered by an HTTP call, and in that Lambda's execution, after you finish processing that step, just launch the next Lambda function asynchronously instead of sending the trigger through SNS or Kinesis. Repeat this process in each of your steps. This would guarantee that each step is executed a single time by Lambda.
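A minimal sketch of that asynchronous hand-off using boto3; the function name and the do_step_one helper are placeholders, not anything from the question:

    # Hedged sketch: invoke the next step's function asynchronously from the current one.
    import json
    import boto3

    lambda_client = boto3.client("lambda")

    def handler(event, context):
        result = do_step_one(event)  # placeholder for this step's own processing
        # InvocationType="Event" returns immediately and runs the target function asynchronously.
        lambda_client.invoke(
            FunctionName="step-two-function",  # hypothetical name of the next step
            InvocationType="Event",
            Payload=json.dumps(result).encode("utf-8"),
        )
        return {"id": result.get("id")}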
Eventful Lambda triggers (SNS, S3, CloudWatch, ...) generally guarantee at-least-once invocation, not exactly-once. As you noted you'd have to handle deduplication manually by, for example, keeping track of event IDs in DynamoDB (using strongly consistent reads!), or by implementing idempotent Lambdas, meaning functions that have no additional effects even when invoked several times with the same input. In your example step 4 is essentially idempotent providing that the function doesn't have any side effects apart from storing the altered copy, and that the new copy overwrites any previously stored copies with the same event ID.
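A sketch of that manual deduplication via a conditional write, assuming a hypothetical processed_events DynamoDB table keyed on event_id:

    # Hedged sketch: atomically record an event ID and detect duplicate deliveries.
    import boto3
    from botocore.exceptions import ClientError

    table = boto3.resource("dynamodb").Table("processed_events")  # hypothetical table

    def seen_before(event_id: str) -> bool:
        """Record the event ID; return True if it was already recorded (duplicate delivery)."""
        try:
            table.put_item(
                Item={"event_id": event_id},
                # The write only succeeds if no item with this event_id exists yet.
                ConditionExpression="attribute_not_exists(event_id)",
            )
            return False  # first time we've seen this event
        except ClientError as err:
            if err.response["Error"]["Code"] == "ConditionalCheckFailedException":
                return True  # duplicate, skip processing
            raise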
One service that does guarantee exactly-once delivery out of the box is SQS FIFO. This service unfortunately cannot be used to trigger Lambdas directly so you'd have to set up a scheduled Lambda to poll the FIFO queue periodically (as per this answer). In your case you could handle step 5 with this arrangement, since I'm assuming you don't want to submit the same resource to the target API several times.
So in summary here's how I'd go about it:
Lambda A, invoked via HTTP, responds with ID and proceeds to asynchronously fetch resource from the API and store it to S3
Lambda B, invoked by S3 upload event, downloads the uploaded resource, alters it, stores the altered copy to S3 and finally pushes a message into the FIFO SQS queue using the altered resource's filename as the distinct deduplication ID
Lambda C, invoked by CloudWatch scheduler, polls the FIFO SQS queue and upon a new message fetches the specified altered resource from S3 and submits it to the other API
With this arrangement even if Lambda B is occasionally executed twice or more by the same S3 upload event there's no harm done since the FIFO SQS queue handles deduplication for you before the flow reaches Lambda C.
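As a minimal sketch of Lambda B's final step in that arrangement, using boto3; the queue URL is a placeholder:

    # Hedged sketch: push to the FIFO queue with an explicit deduplication ID.
    import boto3

    sqs = boto3.client("sqs")
    FIFO_QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/altered-resources.fifo"  # hypothetical

    def notify_step_five(altered_key: str) -> None:
        sqs.send_message(
            QueueUrl=FIFO_QUEUE_URL,
            MessageBody=altered_key,
            MessageGroupId="altered-resources",    # required for FIFO queues
            MessageDeduplicationId=altered_key,    # duplicates within 5 minutes are dropped
        )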
AWS Step Functions is meant for you: https://docs.aws.amazon.com/step-functions/latest/dg/welcome.html
You execute the steps you want based on the outputs of previous executions.
Each task/step just needs to output JSON correctly into the wanted "state".
https://docs.aws.amazon.com/step-functions/latest/dg/concepts-states.html
Based on the state, your workflow will move on. You can create your workflow easily and trigger Lambdas or ECS tasks.
ECS tasks are your own "Lambda" environment, running without the constraints of the AWS Lambda environment.
With ECS tasks you can run on bare metal, on your own EC2 machines, or in Docker containers on ECS, and thus have extensible limits on resources.
Compare that to Lambda, where the limits are pretty strict: 512 MB of /tmp disk space, limited execution time, etc.
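As a rough illustration of what such a workflow looks like (not the poster's actual pipeline), here is a minimal two-state machine chaining Lambda tasks, created with boto3; all ARNs and names are placeholders:

    # Hedged sketch: define and create a minimal Step Functions state machine.
    import json
    import boto3

    sfn = boto3.client("stepfunctions")

    definition = {
        "StartAt": "DownloadResource",
        "States": {
            "DownloadResource": {
                "Type": "Task",
                "Resource": "arn:aws:lambda:us-east-1:123456789012:function:download-resource",
                "Next": "AlterResource",
            },
            "AlterResource": {
                "Type": "Task",
                "Resource": "arn:aws:lambda:us-east-1:123456789012:function:alter-resource",
                "End": True,
            },
        },
    }

    sfn.create_state_machine(
        name="resource-pipeline",
        definition=json.dumps(definition),
        roleArn="arn:aws:iam::123456789012:role/states-execution-role",
    )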

AWS Lambda and Kinesis Client Library (KCL)

How come I find so few examples of the KCL being used with AWS Lambda?
https://docs.aws.amazon.com/streams/latest/dev/developing-consumers-with-kcl.html
It does provide a fine implementation for keeping track of your position on the stream (checkpointing).
I want to use the KCL as a consumer. My setup is a stream with multiple shards, with a Lambda consuming from each shard. I want to use the KCL in the Lambdas to track the position of the iterator on each shard.
Why can't I find anyone who uses the KCL with Lambda?
What is the issue here?
Since you can consume directly from Kinesis in your Lambdas (using Kinesis as an event source), it doesn't make much sense to use the KCL within a Lambda. The event source framework that AWS has built must be using something like the KCL to bring Lambdas up in response to Kinesis events.
It would be super weird to bring up a Lambda, initialize the KCL in the handler, and wait for events during the Lambda's runtime. The Lambda will go down in 5 minutes and you'll do the same thing again. Doing this from an EC2 instance makes sense, but then you're reimplementing the Lambda-Kinesis integration yourself; that is what Lambda does behind the scenes.
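For contrast, a minimal sketch of the built-in approach: a Python handler that receives batches from a Kinesis event source mapping (the process function is a placeholder):

    # Hedged sketch: consuming Kinesis records via Lambda's event source mapping, no KCL needed.
    import base64
    import json

    def handler(event, context):
        for record in event["Records"]:
            # Kinesis record payloads arrive base64-encoded in the event.
            payload = base64.b64decode(record["kinesis"]["data"])
            process(json.loads(payload))

    def process(data):
        print(data)  # placeholder for your own per-record logic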
I do not work for AWS, so obviously I do not know the exact reason why there is no documentation, but here are my thoughts.
First of all, to run the KCL, you need to have the JVM running. This means you can only do this in a Lambda using Java because (to my knowledge at this point) there is no way to pull other SDKs, runtimes, etc. into a Lambda; you choose one runtime at setup. So already they would only be creating documentation for Java Lambdas.
Now for the more technical reason. You need to think about what a Lambda is doing, and then what the KCL is doing.
Let's start with the Lambda. Lambdas are, by design, ephemeral. They can (and will) spin up and go down consistently throughout the day. Of course, you could set up a warming scheme so the Lambdas stay up, but they will still have this ephemeral nature, and it is completely out of your control. In other words, AWS controls when and whether a Lambda stays active, and the exact methods for this are not published. So you can only try to keep things warm.
What does a KCL do?
Connects to the stream
Enumerates the shards
Coordinates shard associations with other workers (if any)
Instantiates a record processor for every shard it manages
Pulls data records from the stream
Pushes the records to the corresponding record processor
Checkpoints processed records
Balances shard-worker associations when the worker instance count changes
Balances shard-worker associations when shards are split or merged
After reading through this list, let's now go back to the ephemeral nature of Lambdas. Every single time a Lambda comes up or goes down, all of this work needs to happen. This includes a complete rebalance between the shards and workers, pulling data records from the streams, setting checkpoints, etc. You would also need to make sure that you never have more Lambdas spun up than the number of shards, as the extras would be worthless (never used in the best case, or registered as workers in the worst case, potentially causing lost messages -- think about what would happen in this scenario with a rebalance).
OK, could you technically pull it off? If you used Java and did everything in your power to keep your Lambdas warm, it might be technically possible. But back to your question: why is there no documentation? I never want to say "never", but generally speaking, Lambdas, with their ephemeral nature, are just not the best use case for the KCL. And if you don't go deep into the weeds on how the KCL works, you'll probably miss something, causing rebalancing issues and potentially causing messages to get lost.
If there is anything inaccurate here, please let me know so I can update. Thanks, and I hope this helps somebody.

Amazon Web Services: Spark Streaming or Lambda

I am looking for some high-level guidance on an architecture. I have a provider writing "transactions" to a Kinesis pipe (about 1MM/day). I need to pull those transactions off, one at a time, validating the data, hitting other SOAP or REST services for additional information, applying some business logic, and writing the results to S3.
One approach that has been proposed is to use a Spark job that runs forever, pulling data and processing it within the Spark environment. The benefits enumerated were shareable cached data, availability of SQL, and in-house knowledge of Spark.
My thought was to have a series of Lambda functions that would process the data. As I understand it, I can have a Lambda watching the Kinesis pipe for new data. I want to run the pulled data through a bunch of small steps (lambdas), each one doing a single step in the process. This seems like an ideal use of Step Functions. With regards to caches, if any are needed, I thought that Redis on ElastiCache could be used.
Can this be done using a combination of Lambda and Step Functions (using lambdas)? If it can be done, is it the best approach? What other alternatives should I consider?
This can be achieved using a combination of Lambda and Step Functions. As you described, the lambda would monitor the stream and kick off a new execution of a state machine, passing the transaction data to it as an input. You can see more documentation around kinesis with lambda here: http://docs.aws.amazon.com/lambda/latest/dg/with-kinesis.html.
The state machine would then pass the data from one Lambda function to the next where the data will be processed and written to S3. You need to contact AWS for an increase on the default 2 per second StartExecution API limit to support 1MM/day.
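A minimal sketch of the kick-off Lambda described above, assuming boto3 and a placeholder state machine ARN:

    # Hedged sketch: a Kinesis-triggered Lambda that starts one state machine execution per transaction.
    import base64
    import json
    import boto3

    sfn = boto3.client("stepfunctions")
    STATE_MACHINE_ARN = "arn:aws:states:us-east-1:123456789012:stateMachine:transaction-pipeline"  # placeholder

    def handler(event, context):
        for record in event["Records"]:
            transaction = json.loads(base64.b64decode(record["kinesis"]["data"]))
            sfn.start_execution(
                stateMachineArn=STATE_MACHINE_ARN,
                input=json.dumps(transaction),
            )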
Hope this helps!

How to ensure data is eventually written to two Azure blobs?

I'm designing a multi-tenant Azure Service Fabric application in which we'll be storing event data in Azure Append-Only blobs.
There'll be two kinds of blobs: merge blobs (one per tenant) and instance blobs (one for each "object" owned by a tenant; there'll be 100K+ of these per tenant).
There'll be a single writer per instance blob. This writer keeps track of the last written blob position and can thereby ensure (using conditional writes) that no other writer has written to the blob since the last successful write. This is an important aspect that we'll use to provide strong consistency per instance.
However, all writes to an instance blob must also eventually (but as soon as possible) reach the single (per tenant) merge blob.
Under normal operation I'd like these merge writes to take place within ~100 ms.
My question is about how we should best implement this guaranteed double-write feature:
The implementation must guarantee that data written to an instance blob will eventually also be written to the corresponding merge blob exactly once.
The following inconsistencies must be avoided:
Data is successfully written to an instance blob but never written to the corresponding merge blob.
Data is written more than once to the merge blob.
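For reference, a minimal sketch of the conditional single-writer append described above, assuming the azure-storage-blob (v12) Python SDK; the connection string, container, and blob names are placeholders:

    # Hedged sketch: append to an instance blob only if it is still at the offset
    # this writer last observed, preserving single-writer semantics.
    from azure.storage.blob import BlobServiceClient

    service = BlobServiceClient.from_connection_string("<connection-string>")
    blob = service.get_blob_client(container="events", blob="tenant1/instance-0001")

    def append_event(data: bytes, expected_offset: int) -> int:
        # appendpos_condition makes the service reject the block (HTTP 412) if
        # another writer has appended since the last successful write.
        blob.append_block(data, appendpos_condition=expected_offset)
        return expected_offset + len(data)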
The easiest way, in my view, is to use events: Service Bus, Event Hubs, or any other provider that guarantees an event will be stored and reachable somewhere. Plus, it gives you the possibility of writing events to Blob Storage in batches. I also think it will significantly reduce pressure on Service Fabric and allow you to process events on your desired schedule.
So you could have a lot of Stateless Services, or just Web Workers, that pick up new messages from a queue and send them in batches to a Stateful Service.
Let's call it a Merge service. You would need to partition these services, and the best way to send a batch of events grouped by partition is through such a Stateless Service or Web Worker.
Then you can have a separate Stateful Actor for each object. But in your place I would try creating 100K actors (or whatever your real workload is) and see how expensive it would be. If it is too expensive and you cannot afford such machines, then everything could be handled in another partitioned Stateless Service.
Okay, now we have the following scheme: something puts logs into the ESB; something picks these events up from the ESB in batches or very frequently, handling transactions and processing errors. After a batch of events is picked from the queue, it is sent to a particular Merge service that stores the data in its state and calls the corresponding actor to do the same thing.
Once the actor has written the data to its state and the service has done the same, the event in the ESB can be marked as processed and removed from the queue. Then you just need to write the stored data from the Merge service and the actors to Blob Storage once in a while.
If an actor is unable to store an event, the operation is not complete and the Merge service should not store the data either. If Blob Storage is unreachable for the actors or Merge services, it will become reachable in the future and the logs will still be stored, since they are saved in state, or at least they could be retrieved from the actors/service manually.
If the Merge service is unreachable, I would store the event in a poison-message queue for later processing, or try writing the logs directly to Blob Storage, though that is a bit dangerous; still, the chance of writing to only one kind of storage at that moment is pretty low.
You could use a Stateful Actor for this. You won't need to worry about concurrency, because there is none. In the state of the Actor you can keep track of which operations were successfully completed (write 1, write 2).
Still, writing 'exactly once' in a distributed system (without a DTC) is never 100% waterproof.
Some more info about that:
link
link
