How to connect microservice to SQS/SNS - amazon-ec2

I have a Java microservice that runs in a Docker container on an EC2 instance.
It has to be notified when a file is dropped into an S3 bucket. We have an SNS topic and an SQS queue connected to the S3 bucket. How can I connect the microservice to SNS/SQS? If there is a better way to get the Java microservice notified when a file is dropped into the S3 bucket, please let me know.

The AWS SDK for Java is pretty good.
You can either:
write an HTTP endpoint that SNS can post to (see http://docs.aws.amazon.com/sns/latest/dg/SendMessageToHttp.example.java.html)
or
subscribe an SQS queue to the SNS topic and poll it with the SDK (see https://github.com/aws/aws-sdk-java/blob/master/src/samples/AmazonSimpleQueueService/SimpleQueueServiceSample.java).
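As a rough sketch of the second option, assuming the SNS topic is already subscribed to the queue, a long-polling consumer with the AWS SDK for Java (v1, matching the linked sample) might look like this. The queue URL below is a placeholder.

```java
import com.amazonaws.services.sqs.AmazonSQS;
import com.amazonaws.services.sqs.AmazonSQSClientBuilder;
import com.amazonaws.services.sqs.model.DeleteMessageRequest;
import com.amazonaws.services.sqs.model.Message;
import com.amazonaws.services.sqs.model.ReceiveMessageRequest;

public class S3NotificationPoller {

    // Placeholder: replace with your queue's URL
    private static final String QUEUE_URL =
            "https://sqs.us-east-1.amazonaws.com/123456789012/s3-notifications";

    public static void main(String[] args) {
        AmazonSQS sqs = AmazonSQSClientBuilder.defaultClient();

        while (true) {
            // Long poll for up to 20 seconds to reduce empty responses
            ReceiveMessageRequest request = new ReceiveMessageRequest(QUEUE_URL)
                    .withWaitTimeSeconds(20)
                    .withMaxNumberOfMessages(10);

            for (Message message : sqs.receiveMessage(request).getMessages()) {
                // With SNS in front of SQS, the body is an SNS envelope whose
                // "Message" field holds the S3 event notification JSON
                System.out.println("Received: " + message.getBody());

                // Delete only after the message has been handled successfully
                sqs.deleteMessage(new DeleteMessageRequest(QUEUE_URL, message.getReceiptHandle()));
            }
        }
    }
}
```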

Yes, this is one use case of AWS Lambda:
As an event-driven compute service where AWS Lambda runs your code in
response to events, such as changes to data in an Amazon S3 bucket or
an Amazon DynamoDB table.
http://docs.aws.amazon.com/lambda/latest/dg/welcome.html
Since Lambda runs your code, you are free to write something that forwards the event to your microservice.
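For example, a Lambda handler that reacts to the S3 event and forwards the bucket and key to a microservice endpoint could look roughly like this sketch; the endpoint URL is a made-up placeholder.

```java
import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;
import com.amazonaws.services.lambda.runtime.events.S3Event;

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class S3ToMicroserviceHandler implements RequestHandler<S3Event, String> {

    // Placeholder: replace with your microservice's endpoint
    private static final String SERVICE_URL = "https://my-service.example.com/files";

    private final HttpClient httpClient = HttpClient.newHttpClient();

    @Override
    public String handleRequest(S3Event event, Context context) {
        for (var record : event.getRecords()) {
            String bucket = record.getS3().getBucket().getName();
            String key = record.getS3().getObject().getKey();

            // Forward the new object's location to the microservice as JSON
            String body = String.format("{\"bucket\":\"%s\",\"key\":\"%s\"}", bucket, key);
            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create(SERVICE_URL))
                    .header("Content-Type", "application/json")
                    .POST(HttpRequest.BodyPublishers.ofString(body))
                    .build();
            try {
                httpClient.send(request, HttpResponse.BodyHandlers.discarding());
            } catch (Exception e) {
                throw new RuntimeException("Failed to notify microservice", e);
            }
        }
        return "OK";
    }
}
```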

Related

How to achieve transaction across an AWS S3 upload and an ElasticSearch update?

Problem
Is there a way to achieve transactionality between S3 and another database like ElasticSearch?
What I'm trying to do is upload an object to S3 and save its identifier to Elasticsearch in an atomic way.
For the backend where the logic lives, we are using Java with Spring Boot.
From AWS docs
I saw that this is a common pattern recommended by AWS, but they mention that you need to handle failures on your own:
"You can also store the item as an object in Amazon Simple Storage Service (Amazon S3) and store the Amazon S3 object identifier in your DynamoDB item."
"DynamoDB doesn't support transactions that cross Amazon S3 and DynamoDB. Therefore, your application must deal with any failures, which could include cleaning up orphaned Amazon S3 objects."
Ref: https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/bp-use-s3-too.html
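There is no built-in transaction spanning the two stores, so the usual approach is a compensating action: write to S3 first, then index the identifier, and clean up the S3 object if indexing fails. A rough sketch follows; the indexDocument call is a hypothetical stand-in for whatever Elasticsearch client you use. This still is not a true transaction (the cleanup itself can fail), so an orphan-cleanup job is often added on top, per the AWS guidance quoted above.

```java
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;

public class ObjectStore {

    private final AmazonS3 s3 = AmazonS3ClientBuilder.defaultClient();

    public void saveWithCompensation(String bucket, String key, String content) {
        // Step 1: upload the object to S3
        s3.putObject(bucket, key, content);

        try {
            // Step 2: store the S3 identifier in Elasticsearch
            indexDocument(key);
        } catch (RuntimeException e) {
            // Compensating action: remove the orphaned S3 object so the two
            // stores do not drift apart
            s3.deleteObject(bucket, key);
            throw e;
        }
    }

    // Hypothetical placeholder for your Elasticsearch indexing call
    private void indexDocument(String s3Key) {
        // e.g. index a document whose body contains the S3 key
    }
}
```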

Setup Terraform stackdriver alerts based on GCP bucket

I am trying to set up Stackdriver alerting policies through Terraform, based on Cloud Storage bucket conditions.
So whenever a file lands in the GCS bucket, it should trigger an email notification to our addresses (not using SendGrid).
For now, I got this email notification working through the GCP console via Stackdriver, but I am trying to incorporate it using Terraform.
Any guidance is really appreciated. Thank you
Figured it out by using terraform import on the Google monitoring policies. Incorporated everything through Terraform and managed to hook up the Stackdriver notifications on bucket changes as well.

How to use Lambda with S3 events in AWS

I came across this question in my AWS study:
A user is designing a new service that receives location updates from
3600 rental cars every second. The cars' locations need to be uploaded
to an Amazon S3 bucket. Each location must also be checked for
distance from the original rental location. Which services will
process the updates and automatically scale?
Options:
A. Amazon EC2 and Amazon EBS
B. Amazon Kinesis Firehose and Amazon S3
C. Amazon ECS and Amazon RDS
D. Amazon S3 events and AWS Lambda
My question is: how can Option D be used as the solution? Or should I use Firehose to ingest (capture and transform) data into S3?
Thanks.
I would choose Option B:
a) Kinesis Firehose provides a service to ingest the data directly into S3.
b) Firehose supports Lambda transformations, which can compute the distance from the original rental location, and the result can be stored in S3.
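To illustrate that transformation step, the distance check itself is just a great-circle calculation; a Firehose transformation Lambda could apply something like this haversine sketch to each record (the coordinates used in the example are arbitrary assumptions).

```java
public final class DistanceCheck {

    private static final double EARTH_RADIUS_KM = 6371.0;

    /** Great-circle (haversine) distance in kilometres between two lat/lon points. */
    public static double haversineKm(double lat1, double lon1, double lat2, double lon2) {
        double dLat = Math.toRadians(lat2 - lat1);
        double dLon = Math.toRadians(lon2 - lon1);
        double a = Math.sin(dLat / 2) * Math.sin(dLat / 2)
                + Math.cos(Math.toRadians(lat1)) * Math.cos(Math.toRadians(lat2))
                  * Math.sin(dLon / 2) * Math.sin(dLon / 2);
        return 2 * EARTH_RADIUS_KM * Math.asin(Math.sqrt(a));
    }

    public static void main(String[] args) {
        // Example: current position vs. original rental location (arbitrary points)
        double distance = haversineKm(47.6062, -122.3321, 47.4502, -122.3088);
        System.out.printf("Distance from rental location: %.1f km%n", distance);
    }
}
```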

Terraform: cloudwatch logs to elasticsearch

I am trying to push the CloudWatch logs to Elasticsearch using either a Lambda function or Amazon Kinesis. I have the log groups set up and the Elasticsearch domain running using Terraform. Please suggest how I can push the logs from the log group to Elasticsearch, and please share Terraform code for this if you have it.
This answer documents some example Terraform code for creating a lambda and Cloudwatch subscription that ships logs from a Cloudwatch log group to a Sumologic HTTP collector (just a basic HTTP POST endpoint). The Cloudwatch subscription invokes the Lambda every time a new batch of log entries is posted to the log group.
The cloudwatch-sumologic-lambda referred to in that Terraform code was patterned off of the Sumologic Lambda example.
I'd imagine you would need to do something similar, but rewriting the Lambda to format the HTTP request however Elasticsearch requires. I'd bet some quick googling on your part will turn up plenty of examples.
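For reference, the Lambda side of such a subscription typically decodes the gzipped, base64-encoded payload and posts it onward. A minimal Java sketch, assuming a plain HTTP endpoint URL as a placeholder for your Elasticsearch destination, could look like this:

```java
import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;
import com.amazonaws.services.lambda.runtime.events.CloudWatchLogsEvent;

import java.io.ByteArrayInputStream;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;
import java.util.Base64;
import java.util.zip.GZIPInputStream;

public class CloudWatchLogsShipper implements RequestHandler<CloudWatchLogsEvent, String> {

    // Placeholder: replace with your Elasticsearch (or other collector) endpoint
    private static final String ENDPOINT = "https://my-es-domain.example.com/logs/_doc";

    private final HttpClient httpClient = HttpClient.newHttpClient();

    @Override
    public String handleRequest(CloudWatchLogsEvent event, Context context) {
        try {
            // The subscription delivers a base64-encoded, gzipped JSON payload
            byte[] compressed = Base64.getDecoder().decode(event.getAwsLogs().getData());
            String json = new String(
                    new GZIPInputStream(new ByteArrayInputStream(compressed)).readAllBytes(),
                    StandardCharsets.UTF_8);

            // Forward the decoded log batch; a real shipper would reshape this
            // into whatever bulk/index format the destination expects
            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create(ENDPOINT))
                    .header("Content-Type", "application/json")
                    .POST(HttpRequest.BodyPublishers.ofString(json))
                    .build();
            httpClient.send(request, HttpResponse.BodyHandlers.discarding());
            return "OK";
        } catch (Exception e) {
            throw new RuntimeException("Failed to ship logs", e);
        }
    }
}
```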
Alternatively to all this Terraform config, though, you can just go to your CloudWatch console, select the log group you're interested in, and choose "Stream to Amazon Elasticsearch Service".
I think that will only work if you're using the AWS Elasticsearch Service offering, though; if you installed and configured Elasticsearch on EC2 instances yourself, it probably won't work.

Cloud Services to run Batch script when file is uploaded?

I am looking to run a batch script on files that are uploaded from my website (one at a time), and return the resulting file produced by that batch script. The website is hosted on a shared linux environment, so I cannot run the batch file on the server.
It sounds like something I could accomplish with Amazon S3 and Amazon Lambda, but I was wondering if there were any other services out there that would allow me to accomplish the same task.
I would recommend that you look into S3 Events and Lambda.
Using S3 events, you can trigger a Lambda function on puts and deletes in an S3 bucket, and depending on your "batch file" task, you may be able to achieve your goal purely in Lambda.
If you cannot use Lambda to replace the functionality of your batch file you can try the following:
If you need to have the batch process run on a specific instance, take a look at Amazon SQS. You can have the S3-event-triggered Lambda create a work item in SQS, and your instance can regularly poll SQS for work to process (a minimal sketch of this follows these options).
If you need something a bit more real time, you could use Amazon SNS for a push rather than pull approach to the above.
If you don't need the file processed by a specific instance but you do have to run a batch file against it, perhaps you can have your S3-event-triggered Lambda launch an instance with a UserData script that preps the server as needed, downloads the S3 file, runs the batch process against it, and finally self-terminates by looking up its own instance ID via the EC2 metadata service and calling the TerminateInstances API.
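A minimal sketch of the SQS approach mentioned above: the S3-event-triggered Lambda just enqueues the bucket and key as a work item for the instance to pick up. The queue URL is a placeholder.

```java
import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;
import com.amazonaws.services.lambda.runtime.events.S3Event;
import com.amazonaws.services.sqs.AmazonSQS;
import com.amazonaws.services.sqs.AmazonSQSClientBuilder;

public class S3UploadToQueueHandler implements RequestHandler<S3Event, Void> {

    // Placeholder: replace with your work queue's URL
    private static final String QUEUE_URL =
            "https://sqs.us-east-1.amazonaws.com/123456789012/batch-work";

    private final AmazonSQS sqs = AmazonSQSClientBuilder.defaultClient();

    @Override
    public Void handleRequest(S3Event event, Context context) {
        for (var record : event.getRecords()) {
            String bucket = record.getS3().getBucket().getName();
            String key = record.getS3().getObject().getKey();

            // One work item per uploaded object; the polling instance downloads
            // the object, runs the batch script against it, and deletes the message
            String workItem = String.format("{\"bucket\":\"%s\",\"key\":\"%s\"}", bucket, key);
            sqs.sendMessage(QUEUE_URL, workItem);
        }
        return null;
    }
}
```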
Here is some related reading to assist with the above approaches:
Amazon SQS
https://aws.amazon.com/documentation/sqs/
Amazon SNS
https://aws.amazon.com/documentation/sns/
Amazon Lambda
https://aws.amazon.com/documentation/lambda/
Amazon S3 Event Notifications
http://docs.aws.amazon.com/AmazonS3/latest/dev/NotificationHowTo.html
EC2 UserData
http://docs.aws.amazon.com/AWSEC2/latest/WindowsGuide/ec2-instance-metadata.html#instancedata-add-user-data
EC2 Metadata Service
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-instance-metadata.html#instancedata-data-retrieval
AWS Tools for Powershell Cmdlet Reference
http://docs.aws.amazon.com/powershell/latest/reference/Index.html
