We just started using AWS. All our images are stored on Amazon S3. Now we're generating thumbnails using an EC2 server running Thumbor.
We added CloudFront in front of the EC2 instance to cache the images at CloudFront's edge locations, but it doesn't seem to work (image loading times of up to 30 seconds). So, is there a way to see the cached files at the edge locations, or to confirm whether CloudFront is working at all?
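One quick way to check this is to inspect the response headers: CloudFront adds an X-Cache header that says whether the object was a hit or a miss at the edge. A minimal check with curl (the distribution domain and image path below are placeholders):

    # Request the same image twice through the CloudFront distribution.
    curl -sI https://d1234example.cloudfront.net/thumbs/example.jpg | grep -iE 'x-cache|age|via'
    # First request usually shows:  X-Cache: Miss from cloudfront
    # Repeating it should show:     X-Cache: Hit from cloudfront   if caching works
    curl -sI https://d1234example.cloudfront.net/thumbs/example.jpg | grep -iE 'x-cache|age|via'

If every request stays a miss, the origin (Thumbor) may be sending headers such as Cache-Control that prevent CloudFront from caching the responses.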
I want to upload thousands of images from my DigitalOcean Droplet to my S3 bucket. I have already created a piece of code that uploads and crops all new images from my site to the bucket, and that is working fine. Now I just want to move all of my existing images from my production Droplet to the bucket.
I have 52 GB of images stored, so I don't know how to move all of them to the bucket. What would be the best approach?
The best approach would be:
Create a Zip file of the images you want to transfer.
Create an EC2 instance in the same region as the bucket.
Copy the Zip file to the EC2 instance.
Unzip the Zip file on the EC2 instance.
Use the AWS CLI to copy the images from the EC2 instance to the bucket.
The other approach is to use the AWS CLI directly from the Droplet, but with such a large number of files it will take a lot of time to transfer.
With the AWS CLI you can use either aws s3 cp or aws s3 sync to copy your images, for example:
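A minimal sketch from the EC2 instance, assuming the AWS CLI is configured and using placeholder paths and bucket name:

    # Sync the unzipped image directory to the bucket (only new/changed files are uploaded).
    aws s3 sync /home/ec2-user/images s3://my-image-bucket/images/

    # Or copy everything recursively with cp.
    aws s3 cp /home/ec2-user/images s3://my-image-bucket/images/ --recursive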
I have Magento running on an EC2 instance along with S3 and CloudFront. Whenever I make changes to a file on the EC2 instance that also has a copy in the S3 bucket, how can I auto-update or force-update it so the changes are reflected on the front end? I know about invalidations in CloudFront, but I have no idea how to push the update from the EC2 instance to the S3 bucket.
For example, if I make changes to style.css on the EC2 instance, I want it updated or force-updated in the S3 bucket, and then I can add the file name to a CloudFront invalidation so it appears on my front end.
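For reference, both steps can be scripted with the AWS CLI from the EC2 instance; a sketch with placeholder paths, bucket name, and distribution ID:

    # Push the changed file from the EC2 instance to the S3 bucket.
    aws s3 cp /var/www/html/skin/frontend/default/default/css/style.css \
        s3://my-magento-assets/skin/frontend/default/default/css/style.css

    # Invalidate the cached copy so CloudFront edge locations fetch the new version.
    aws cloudfront create-invalidation \
        --distribution-id E1234EXAMPLE \
        --paths '/skin/frontend/default/default/css/style.css'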
I'm working with EMR (Elastic MapReduce) on AWS infrastructure and the default way to provide input files (large datasets) for programs is to upload them to an S3 bucket and reference those buckets from within EMR.
Usually I download the datasets to my local development machine and then upload them to S3, but this is getting harder with larger files, as upload speeds are generally much lower than download speeds.
My question is: is there a way to download files from the internet (given their URL) directly into S3, so I don't have to download them to my local machine and then manually upload them?
No. You need an intermediary; typically an EC2 instance is used rather than your local machine, for speed.
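For example, from the EC2 instance the download can be streamed straight into S3 without touching the local disk; a sketch with a placeholder URL and bucket name:

    # "-" tells aws s3 cp to read the object from stdin.
    # For very large files, --expected-size helps the multipart upload size its parts.
    curl -L 'https://example.com/big-dataset.csv.gz' \
        | aws s3 cp - s3://my-emr-input-bucket/datasets/big-dataset.csv.gz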
We are facing a serious problem with Amazon EC2 and PrestaShop.
We deployed PrestaShop with AWS Elastic Beanstalk and set up S3 for the media servers. When we upload products with images in bulk using the CSV import feature, we face the problems below.
A new EC2 instance gets created and loses all the cached CSS and JS files, and the media server entries in the database get emptied. Because of this we have to regenerate all the CSS and JS files and upload them to S3 every time, since the previously generated CSS and JS are now useless.
While the images are downloading, if a new EC2 instance is created, we lose the images too.
Kindly help us with a better solution for the above problems.
Best Regards,
My team and I are creating a mobile game that includes a map. We store JSON information in multiple files; each file represents a tile on the map. To render the map, we download the files and process them to create streets, buildings, etc.
I want to choose the best way to download the tile files to the mobile devices, but I wasn't able to run this test on the mobile devices themselves, so I used a browser and Node.js scripts.
I used a 100 KB JSON file, uploaded it to an S3 bucket and to EC2 storage, and wrote a few Node.js scripts to connect to S3 or EC2:
GET request from a local Node.js script to the S3 bucket (bucket.zone.amazonaws.com/file) - ~650ms
GET request from a local Node.js script to a Node.js server running on the EC2 instance, which fetches the file from S3 - ~1032ms
GET request from a local Node.js script to a Node.js server running on the EC2 instance, which loads the file from local storage - ~833ms
The difference between the last two values is essentially the extra time the EC2 instance needs to fetch the file from the bucket. The reason for making a request to S3 from EC2 is that I know connections between AWS services are really fast.
The other test I made was from the browser (Firefox):
Directly accessed the S3 bucket (bucket.zone.amazonaws.com/file) - ~624ms with values between 400ms and 1000ms
Through the Apache server on EC2 (domain/file) - ~875ms with values between 649ms and 1090ms
Through the Node.js server (running on EC2) that fetches the file from the S3 bucket (domain:port) - ~1014ms with values between 680ms and 1700ms
Through the Node.js server (running on EC2) that loads the file from local storage (domain:port) - ~965ms with values between 600ms and 1700ms
My question is: why is there such a big difference between accessing the file from the browser and accessing it through the Node.js script?
To log the times, I made each request 10 times and took the mean.
The EC2 instance is a micro instance in Ireland. The bucket is in Ireland too.
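For reference, similar timings can also be reproduced from the command line with curl instead of the Node.js scripts; a rough sketch with a placeholder URL:

    # Time 10 GET requests against one endpoint and print the total time of each.
    # Replace the URL with the S3 object URL or the EC2 endpoint under test.
    for i in $(seq 1 10); do
        curl -s -o /dev/null -w '%{time_total}\n' 'https://bucket.zone.amazonaws.com/file'
    done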
Here are a couple of clues that may help you profile this.
Cache: when you fetch the JSON data with a script, the cache mechanism does not come into play. The browser, however, honors the cache headers and may serve the file from its cache, which reduces the load time.
Gzip: I suspect you did not enable gzip compression in your Node.js server, and I am not sure whether you configured it on Apache. For a 100 KB JSON file, compression would certainly reduce the transfer time.
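A quick way to check both points is to look at the response headers and the bytes actually transferred with curl (the URL below is a placeholder for each endpoint you tested):

    # Is the response compressed, and are cache headers present?
    curl -sI -H 'Accept-Encoding: gzip' 'https://bucket.zone.amazonaws.com/file' \
        | grep -iE 'content-encoding|content-length|cache-control|etag'

    # Bytes on the wire with and without compression.
    curl -s -o /dev/null -w 'with gzip:    %{size_download} bytes\n' \
        -H 'Accept-Encoding: gzip' 'https://bucket.zone.amazonaws.com/file'
    curl -s -o /dev/null -w 'without gzip: %{size_download} bytes\n' \
        'https://bucket.zone.amazonaws.com/file'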
Thanks
So I think this question no longer makes sense, since the times are much more even after hard-refreshing the page.