Why does the AWS Elastic Beanstalk service use an S3 bucket? - amazon-ec2

I've launched an application on AWS Elastic Beanstalk using a pre-installed server template.
During the Beanstalk setup I can see that it creates an S3 bucket. I'm pretty sure I didn't select any option to use an S3 bucket. If an S3 bucket is needed for the Beanstalk application, can you tell me how the two work together and what the bucket's purpose is? Can I prevent Beanstalk from using S3?

This S3 bucket is indeed created automatically by Elastic Beanstalk for your new application.
It is used to store some environment files and, more importantly, zipped builds of your app (each one being a different application version). The Beanstalk deployment script simply downloads the .zip from the bucket to the instance's EBS volume.
It looks like there is no option in AWS to change this behavior.
By the way, why don't you want to use S3?
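If you want to see what Beanstalk actually put in that bucket, a quick CLI sketch (the region and account id below are placeholders, not from the question):

```shell
# Beanstalk's bucket follows the naming pattern
# elasticbeanstalk-<region>-<account-id>
aws s3 ls | grep elasticbeanstalk

# The zipped application versions are stored inside it
aws s3 ls s3://elasticbeanstalk-us-east-1-123456789012/ --recursive
```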

Related

Is there a way to copy S3 bucket properties from one bucket to another using the AWS CLI?

The title pretty much sums up my question. I am having no problem copying files from Bucket A to Bucket B, but I would also like to copy Bucket A's properties to Bucket B from the CLI (i.e., setting static website hosting to enabled, or versioning to enabled, etc.). Here are the commands I am running right now:
aws s3 mb s3://$S3_NEW_BUCKET_NAME
aws s3 sync s3://$S3_PROD_BUCKET_NAME/ s3://$S3_NEW_BUCKET_NAME/
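Object sync aside, the bucket properties have to be copied separately. A sketch with the lower-level `s3api` commands (assuming the same bucket variables as above; versioning and static-website hosting are shown as examples):

```shell
# Check whether versioning is enabled on the source bucket
aws s3api get-bucket-versioning --bucket "$S3_PROD_BUCKET_NAME"

# If it was, enable it on the new bucket
aws s3api put-bucket-versioning --bucket "$S3_NEW_BUCKET_NAME" \
    --versioning-configuration Status=Enabled

# Copy the static-website configuration across
aws s3api get-bucket-website --bucket "$S3_PROD_BUCKET_NAME" > website.json
aws s3api put-bucket-website --bucket "$S3_NEW_BUCKET_NAME" \
    --website-configuration file://website.json
```

Each property (CORS, lifecycle, policy, ...) has its own `get-bucket-*`/`put-bucket-*` pair, so there is no single command that copies everything.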

How can an application deployed on Elastic Beanstalk access files created by another application running on EC2?

I have an application (A) deployed on AWS using Elastic Beanstalk. I also have another multi-threaded Java application (B), which periodically creates files that need to be read/updated by application (A) running on Elastic Beanstalk.
If I run application (B) directly on EC2, application (A) does not have access to its files.
What model should I use in this situation so that application (A) can access the files created by application (B)?
Upload the files created by B to S3; you can do this with the AWS API, or use S3 Fuse to mount the bucket in the filesystem. Then have A read them the same way, with either the API or S3 Fuse.
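A sketch of the S3 Fuse route (using the s3fs-fuse tool; the bucket name, mount point, and file paths are all placeholders):

```shell
# On both instances: mount the shared bucket with s3fs-fuse,
# using the instance's IAM role for credentials
s3fs my-shared-bucket /mnt/shared -o iam_role=auto

# Application B writes its periodic files under the mount...
cp /tmp/report.csv /mnt/shared/reports/report.csv

# ...and application A reads them back like any local path
cat /mnt/shared/reports/report.csv
```

Keep in mind S3 is eventually consistent and not a real filesystem, so this works best for whole-file writes rather than in-place updates.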

sync EBS volumes via S3

I am looking to have multiple Amazon EC2 instances use the same data store. Amazon does not condone mounting an S3 Bucket as a file system, so I am trying to avoid this solution. Is there a way to synchronize an EBS volume with S3 or would it be best to use rsync and cron?
Do you really have to have the files locally available from within EBS? What if instead you served them to yourself via CloudFront, and restricted the permissions so that only your instances (or only your Security Group) could see the files?
Come Fall 2015 you'll be able to use Elastic File System (EFS) for this. But until then, I suppose the next best thing is to use the AWS command line to sync down from S3 to your volume:
aws s3 sync s3://my-bucket/my/folder /mnt/my-ebs/
After the initial run, that sync command is surprisingly fast. So from there you could just cron it to run hourly or so?
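As a sketch, the cron entry for that hourly sync might look like this (the paths and log file are placeholders):

```
# Run the S3-to-EBS sync at the top of every hour
0 * * * *  aws s3 sync s3://my-bucket/my/folder /mnt/my-ebs/ >> /var/log/s3-sync.log 2>&1
```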

How to access file storage from web application on Amazon EC2

I am in the process of hosting a dynamic website on Amazon EC2. I have created the environment and deployed a war on Elastic Beanstalk. I can connect to the MySQL database too. But I am not sure how my web application will read/write to the disk, and at which path.
As per my understanding, Amazon provides three options for file storage:
S3
EBS (persistent)
instance storage
I could upload files to S3 by creating a bucket, but how can my web application read or write to an S3 bucket path from a different server?
I am not sure how I should upload or write files to EBS. After connecting to EC2, I cannot cd into the /dev/sd* device for the EBS volume attached to my environment's instance. How can I configure my web app to use this as a directory for images etc.?
Instance storage is lost if I stop or recreate the environment, so it is not persistent, and I am not interested in storing files there.
Can you help me on this?
Where to upload file that are read by application?
Where can my application write files?
Your question: "how can my web application read or write to an S3 bucket path from a different server?"
I'm a newbie user of AWS too, so I can only offer limited help, but this is what I understand:
The webapp running in the EC2 instance can access S3 storage using the REST or SOAP APIs. Here's the link to the reference guide for using the REST GET operation to fetch a file from S3:
GET object documentation
I guess the idea is that the S3 bucket that Amazon creates for your Elastic Beanstalk "environments" provides permanent storage for your application and data files (images etc.). When an EC2 instance is created or rebooted, it should fetch any additional application files from an S3 bucket and 'cache' them on the file system ("volume") attached to the EC2 "instance".
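To make the REST GET concrete (a sketch; the bucket and object key are placeholders): a publicly readable object can be fetched with a plain HTTP GET, while private objects need a signed request, which the AWS CLI handles for you:

```shell
# Plain REST GET against the S3 endpoint (works for public objects)
curl -O https://my-app-assets.s3.amazonaws.com/images/logo.png

# For private objects, let the AWS CLI sign the request instead
aws s3 cp s3://my-app-assets/images/logo.png ./logo.png
```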

Preserve content after elastic beanstalk deployment

I have a Symfony2 website running on Amazon EC2 with Elastic Beanstalk. Each time I deploy a new git version of my project, I immediately lose the contents of a folder ("/web/uploads").
Is there a way to tell Elastic Beanstalk that this content shouldn't be overwritten?
I was thinking of specifying an extra command in the container_commands section of my .ebextensions file, but I'm not sure that is the best way to solve the problem.
You can't preserve that content on the instance. You will need to store it externally in a location such as S3, RDS, or DynamoDB. The other thing to note about Elastic Beanstalk is that if it scales up another instance of your app, that instance won't have the content you stored locally on the first one. I know it sounds harsh and limiting, but having an automated config/deploy is less trouble in the long run, IMHO. ;)
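One common workaround along the lines of the container_commands idea above (a sketch only; the bucket name, config filename, and the /var/app/ondeck staging path are assumptions, the latter applying to the legacy Amazon Linux platform): keep the canonical copy of the uploads in S3 and pull them onto each instance at deploy time:

```yaml
# .ebextensions/01-restore-uploads.config
container_commands:
  01_restore_uploads:
    # Sync the uploads stored in S3 into the app's staging
    # directory before it is moved into place
    command: "aws s3 sync s3://my-app-uploads /var/app/ondeck/web/uploads"
```

The app itself should then also write new uploads to S3 (or sync them up) so they survive the next deployment and are visible to every instance.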
