Amazon EC2 and Prestashop - amazon-ec2

We are facing a serious problem with Amazon EC2 and Prestashop.
We deployed Prestashop with Amazon Elastic Beanstalk and set up S3 for the media servers. When we upload products with images in bulk using the CSV import feature, we run into the problems below.
A new EC2 instance gets created and loses all the cached CSS and JS files, and the media server entries in the database are emptied. Because of this we have to regenerate all the CSS and JS files and upload them to S3 every time, since the previously generated CSS and JS are now useless.
If a new EC2 instance is created while the images are downloading, we lose the images too.
Kindly help us find a better solution to the above problems.
Best Regards,
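One way to work around the ephemeral instance storage is to push the regenerated assets up to S3 whenever they are rebuilt, so that a freshly created instance does not invalidate what the media servers point at. Below is only a minimal sketch using boto3; the bucket name, the local cache directory, and the key prefix are assumptions and will differ for your Prestashop theme setup.

```python
import os
import boto3

# Assumed values -- adjust to your environment.
BUCKET = "my-prestashop-media"                                   # hypothetical S3 bucket
CACHE_DIR = "/var/app/current/themes/default-bootstrap/cache"    # assumed generated-asset directory
PREFIX = "themes/default-bootstrap/cache"                        # key prefix served by the media server

s3 = boto3.client("s3")

def sync_generated_assets():
    """Upload every generated CSS/JS file in the cache directory to S3."""
    for root, _dirs, files in os.walk(CACHE_DIR):
        for name in files:
            if not name.endswith((".css", ".js")):
                continue
            local_path = os.path.join(root, name)
            key = f"{PREFIX}/{os.path.relpath(local_path, CACHE_DIR)}"
            s3.upload_file(local_path, BUCKET, key)

if __name__ == "__main__":
    sync_generated_assets()
```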

Related

AWS temporary files before uploading to S3?

My Laravel app allows users to upload images. Currently, when the user uploads their images, they are stored in a temporary location on the server. A cron job then modifies the uploaded images (compresses them, etc.), and uploads them to S3. Any temporary files older than 48 hours that failed to upload to S3 are deleted by another cron job.
I've set up an Elastic Beanstalk environment, but it's occurred to me that storing uploaded images in a temporary directory on an instance is risky because instances can be created and destroyed when necessary.
How and where, then, would I store these temporary files so that they're not at risk of being lost when an instance is destroyed?
As discussed in the comments, I think that uploading the files to S3 right away is the best option. As far as I know, it's not possible to stop Elastic Beanstalk from destroying an EC2 instance unless you want to give up all of the scaling and instance failure/auto-replacement features.
One option I don't know much about may be AWS EBS. "Amazon Elastic Block Store (Amazon EBS) provides persistent block storage volumes for use with Amazon EC2 instances in the AWS Cloud." I don't have any direct experience with EBS; the overriding question, of course, would be whether EBS is truly persistent even after an EC2 instance is destroyed. Since EBS has costs associated with it and you are already using S3, S3 seems like the way to go.
S3 has a feature called object lifecycle management that you can use to have files deleted automatically, by setting them to expire 2 days after they're uploaded.
You can either:
A) Prefix the temporary files to put them in an S3 pseudo-folder (i.e., Temp/), apply the object lifecycle expiration rule to that specific prefix (or "folder"), and use the files in there as the source of truth for the new files derived from them post-manipulation (see the sketch after option B).
or
B) Create an S3 bucket specifically for temporary files. Manipulate the files from there and copy to the production bucket.
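For option A, a lifecycle rule along these lines would apply the two-day expiry to the Temp/ prefix. This is only a sketch using boto3; the bucket name is a placeholder, while the prefix and expiry match the numbers discussed above.

```python
import boto3

s3 = boto3.client("s3")

# Expire anything under the Temp/ prefix two days after it is uploaded.
# "my-upload-bucket" is a placeholder bucket name.
s3.put_bucket_lifecycle_configuration(
    Bucket="my-upload-bucket",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "expire-temp-uploads",
                "Filter": {"Prefix": "Temp/"},
                "Status": "Enabled",
                "Expiration": {"Days": 2},
            }
        ]
    },
)
```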

List all files cached by amazon cloudfront

We just started using AWS. All our images are stored on Amazon S3. Now we're making thumbnails using an EC2 server with Thumbor.
We added CloudFront in front of the EC2 instance to cache the images at CloudFront's edge locations, but it looks like this isn't working (image loading times of up to 30 seconds). So, is there a way to see the cached files in the edge locations, or to confirm that CloudFront isn't working?
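As far as I know, CloudFront doesn't let you list the objects cached at its edge locations, but you can check whether a given URL is being served from the cache by looking at the response headers CloudFront adds. A quick sketch (the URL is a placeholder):

```python
import requests

# Placeholder URL -- use one of your CloudFront-served image URLs.
url = "https://dxxxxxxxxxxxx.cloudfront.net/thumbs/example.jpg"

resp = requests.get(url)

# "Hit from cloudfront" means the object was served from the edge cache;
# "Miss from cloudfront" means it went back to the origin (the EC2/Thumbor server).
print(resp.status_code)
print(resp.headers.get("X-Cache"))
print(resp.headers.get("Age"))           # how long the object has been cached, if present
print(resp.elapsed.total_seconds())      # rough client-side timing
```

Repeated misses, or a missing X-Cache header, would confirm that CloudFront isn't actually caching the thumbnails.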

Scale Magento on AWS Elastic Beanstalk

I have looked in the Magento & AWS Documentation but that didn't really help.
I've installed Magento on Elastic Beanstalk with 1 instance. I proceeded to snapshot the volume, make an AMI, and change the AMI setting in Elastic Beanstalk, then spun up 2 more instances in the other availability zones. They went back to the Magento installation page.
How do I fix this? I thought the AMI made from the snapshot would have captured the DB and other files created on installation, meaning they'd just connect to the DB and run.
Cheers to anyone who helps!
You probably don't want your database installed on the EC2 instances inside Elastic Beanstalk, since Elastic Beanstalk simply removes an instance when there is an error on it and spawns a new one; you then end up losing data. Besides that, you need one single database server, not a database server on each of the EC2 instances inside the Elastic Beanstalk environment.
You want a separate database server, I personally always use Amazon RDS for this since this is made for this purpose.
When you are getting the install page, it probably means Magento cannot locate your app/etc/local.xml. Since AWS normally gets your files from git, and it is best practice not to have your local.xml in version control, you are probably missing this one?
Hope that I pointed you in the right direction.
Keep in mind that the database is just your first challenge; the next things you are going to need to handle are:
Sessions (database is a good option, but I use AWS ElastiCache with Redis)
Cache (again, AWS ElastiCache with Redis)
Media storage (I use S3 with S3FS and CloudFront)
CDN (CloudFront)
Here are some sites that helped me set up my first Elastic Beanstalk environments with Magento:
http://www.aschroder.com/2013/04/actually-running-magento-on-amazons-elastic-beanstalk-cloud-platform/
http://www.slideshare.net/corleycloud/scale-your-magento-app-with-elastic-beanstalk
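For the sessions and cache points above, a quick way to confirm that the instances can actually reach the ElastiCache Redis endpoint is a small connectivity check. A sketch using the redis-py client; the endpoint hostname is a placeholder copied from the ElastiCache console:

```python
import redis

# Placeholder ElastiCache endpoint -- use the primary endpoint from the AWS console.
ENDPOINT = "my-magento-cache.abc123.0001.use1.cache.amazonaws.com"

client = redis.Redis(host=ENDPOINT, port=6379, socket_connect_timeout=3)

try:
    client.ping()                                   # raises if the endpoint is unreachable
    client.set("connectivity-check", "ok", ex=60)   # short-lived test key
    print("Redis reachable:", client.get("connectivity-check"))
except redis.exceptions.ConnectionError as exc:
    print("Cannot reach ElastiCache endpoint:", exc)
```

If this fails from an Elastic Beanstalk instance, the usual suspects are the security group rules between the instances and the cache cluster.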

How to access file storage from web application on Amazon EC2

I am in the process of hosting a dynamic website on Amazon EC2. I have created the environment and deployed the WAR on Elastic Beanstalk. I can connect to the MySQL database too. But I am not sure how my web application will read/write to the disk, and at which path?
As per my understanding, Amazon provides 3 options for file storage:
S3
EBS (persistent)
instance storage
I could upload files to S3 by creating a bucket, but how can my web application read from or write to the S3 bucket path from a different server?
I am not sure how I should upload or write files to EBS. Connecting to EC2, I cannot cd into the /dev/sd* device for the EBS volume attached to my environment's instance. How can I configure my web app to use this as a directory for images etc.?
Instance storage is lost if I stop or recreate the environment and is non-persistent, so I am not interested in storing files there.
Can you help me on this?
Where to upload file that are read by application?
Where can my application write files?
Your question: "how can my web application read or write to an S3 bucket path on a different server?"
I'm a newbie user of AWS too, so can only offer limited help, but this is what I understand:
The webapp running in the EC2 instance can access S3 storage using the REST or SOAP APIs. Here's the link to the reference guide for using the REST GET operation to fetch a file from S3:
GET object documentation
I guess the idea is that the S3 bucket that Amazon creates for your Elastic Beanstalk environments provides permanent storage for your application and data files (images etc.). When an EC2 instance is created or rebooted, it should get any additional application files from an S3 bucket and 'cache' them on the file system ("volume") attached to the EC2 instance.
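If the application (or a helper process) can use an AWS SDK rather than raw REST calls, reading and writing objects comes down to a couple of SDK calls; the equivalent operations exist in the AWS SDK for Java for a WAR-based app. A minimal sketch with boto3, where the bucket name, keys, and local paths are placeholders:

```python
import os
import boto3

s3 = boto3.client("s3")

BUCKET = "my-webapp-files"    # placeholder bucket name

# Write: upload an image the application produced locally.
s3.upload_file("/tmp/uploads/photo.jpg", BUCKET, "images/photo.jpg")

# Read: download the same object to a local path the application can serve.
os.makedirs("/tmp/cache", exist_ok=True)
s3.download_file(BUCKET, "images/photo.jpg", "/tmp/cache/photo.jpg")

# Or read the object body directly into memory instead of a file.
body = s3.get_object(Bucket=BUCKET, Key="images/photo.jpg")["Body"].read()
print(len(body), "bytes")
```

This way the bucket, not any one instance's disk, is the path your application reads from and writes to.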

amazon ec2 and s3 setup

I am about to migrate a large web project (many sites using common data) to EC2, and I wondered what would be the best setup (I am very much a newbie with Amazon AWS).
The site pages are rebuilt by scripts once a week and the resulting static pages are served (currently about 7 to 10k views a day). In between the weekly builds I would like to access the DB to add/edit data.
I am thinking either EC2 + RDS or EC2 and S3 (S3 having the advantage of keeping a copy of the static pages too). Do these options sound reasonable, based on what I have mentioned?
Thanks in advance
We're using EC2 (we experimented with a few instance types, just to learn that CPU extra large worked best for our type of application), and rather than using RDS we make extensive use of EBS:
one EBS volume for running code, one EBS volume which holds the MySQL database files.
S3 is used mostly for incremental backups, as the EBS volumes can easily be mounted on any other instance.
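If you go the EBS-plus-S3 route, the incremental backups can be as simple as pushing a dated database dump from the EBS volume to S3 on a schedule. A rough sketch, assuming a dump file already produced by mysqldump and a placeholder bucket name:

```python
import datetime
import boto3

s3 = boto3.client("s3")

BUCKET = "my-site-backups"                       # placeholder bucket name
DUMP_PATH = "/mnt/data/backups/site.sql.gz"      # assumed mysqldump output on the EBS volume

# Store each backup under a dated key so older copies are kept.
key = f"mysql/{datetime.date.today().isoformat()}/site.sql.gz"
s3.upload_file(DUMP_PATH, BUCKET, key)
print("uploaded", key)
```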
