How to browse images on AWS S3

I have AWS S3 buckets full of images and was wondering how I could browse them without needing to download the whole batch first. Is there a way that I can pipe them into feh through awscli or some other method?

Amazon S3 can act as a web server for your images. However, it simply serves the image files when they are requested. You will need to write an application or web page that incorporates those images into a form suitable for viewing.
For example, you could list the files in the Amazon S3 bucket and turn that listing into an HTML page with lots of <img src=... /> tags. The web browser would then download the images from S3 and render them in the page.
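As a concrete sketch (assuming the boto3 Python SDK; the bucket name is a placeholder), such a page could be generated with presigned URLs, so it works even for a private bucket:

import boto3

s3 = boto3.client("s3")
bucket = "my-image-bucket"  # placeholder bucket name

# List the object keys, keeping only image files (first 1000 results;
# use a paginator for larger buckets).
resp = s3.list_objects_v2(Bucket=bucket)
keys = [o["Key"] for o in resp.get("Contents", [])
        if o["Key"].lower().endswith((".jpg", ".jpeg", ".png", ".gif"))]

# Presigned URLs let the browser fetch objects from a private bucket.
tags = []
for key in keys:
    url = s3.generate_presigned_url(
        "get_object", Params={"Bucket": bucket, "Key": key}, ExpiresIn=3600)
    tags.append('<img src="%s" alt="%s" />' % (url, key))

with open("gallery.html", "w") as f:
    f.write("<html><body>\n" + "\n".join(tags) + "\n</body></html>")

Open gallery.html locally and the browser pulls each image straight from S3; viewers like feh that accept URLs could likewise be pointed at the presigned URLs directly.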
If you are looking for a full-featured photo management app, try services like Prime Photos from Amazon or SmugMug. They've done all the hard work for you.

Related

Google App Engine & image server

I'm having difficulty working out whether my idea for an image gallery can work, as I can't seem to get it going.
What I have:
A Google App Engine app running a simple website that serves products, where each product can have images
A Google Cloud Storage bucket with millions of images
What I planned to do:
Add a CDN & load balancer in front of the Google Cloud Storage bucket to serve the images quickly worldwide on a subdomain.
Status: This works. At least it serves the images.
Problems:
But I have the feeling the architecture is not right, since the Google App Engine app can't be put behind the same load balancer & CDN to serve all the static content through it. I also see no way to add the content-caching headers. Google's documentation says I should be able to add cache keys in the load balancer config, but I've been through that config and the backend-bucket config ten times with no luck finding any. And you can't set this in the App Engine app.yaml either, as the images are not served via App Engine...
So questions:
Is it logical in this setup to have a GAE app and a separate load balancer with a storage bucket for the images?
How do I add cache-control headers to the CDN/bucket config of Google Cloud CDN?
Assuming the GCS bucket setup you already have in place lets you serve an image via the CDN & load balancer as you desire, say at a URL like https://www.example.com/img.png (note this must be an HTTP(S) URL; the gs:// form is only usable by the gcloud/gsutil tooling, not by browsers), then handling such a request already includes all the required cache control.
If so, then in your GAE app-provided pages, instead of referencing an image via a relative path to your site, like <img src="/static/img.png">, which would indeed require handling its own cache management inside the GAE app code, you could simply reference the image via its corresponding URL in the existing CDN setup: <img src="https://www.example.com/img.png">, with all cache control already included.
Note: I didn't actually try it (I don't have such GCS CDN setup), but I see no reason for which this wouldn't work.
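For the cache-control question specifically: Cloud CDN honors the Cache-Control metadata set on the GCS objects themselves, so one option is to set that metadata per object. A minimal sketch with the google-cloud-storage Python client (bucket and object names are placeholders):

from google.cloud import storage

client = storage.Client()
bucket = client.bucket("my-image-bucket")  # placeholder bucket name
blob = bucket.blob("img.png")              # placeholder object name

# Cloud CDN (and browsers) will honor this metadata when serving the object.
blob.cache_control = "public, max-age=86400"
blob.patch()  # push the metadata change to GCS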

Delete Ghost images uploaded on AWS S3

I'm building a Ghost blog that will be hosted on Heroku, which cannot persist image uploads, so I searched around on Google and found a way to upload images directly to an Amazon S3 bucket.
After a four-hour fight I've been able to upload and read images from S3 in the Ghost blog, but what I'd also like to achieve is deleting an image I'm not using anymore.
Let's say I upload an image, I don't like it, and I replace it with another one. S3 keeps both images.
Is there a way to delete the unused image from both Ghost and S3?
I'm using ghost-s3-storage. Thanks
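For the S3 side, removing an object is a single API call; the harder part is knowing from inside Ghost which key has become unused. A minimal sketch with boto3 (the bucket and key are placeholders, not anything ghost-s3-storage exposes):

import boto3

s3 = boto3.client("s3")

# Delete the old object once the replacement has been uploaded.
s3.delete_object(Bucket="my-ghost-images", Key="content/images/old-cover.jpg")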

Amazon S3 images for web site [Advice needed]

My hosting has very limited storage space, and I want some of my images (or all of them) to be stored on the Amazon S3 service.
For example: a user uploads a picture to the site. I take the original image, create 5 different thumbnails from it, send the original to S3 with the putObject function, and delete it locally. Then the user wants to edit his picture, let's say rotate it. Do I download the original image, rotate it, regenerate the 5 thumbnails from it, and put everything back to S3?
What if I want one of the generated thumbnails to live only on Amazon S3 because it is rarely viewed (let's say it is a big portrait, served only on click from a user profile)? Do I download the image to my machine, send it to the page, and then delete it? Will this approach be fast? Is there better logic for this part?
Struggling for advice.
There is no reason you can't store all your images on S3 - the original and all of the generated thumbnails.
The host receives the uploaded image, generates the required thumbnails and then PUTs them all to S3. Your web application then references all of your images directly from their S3 locations; there is no need to download them to your host in order to show them on your website.
Serving your images (and in fact all of your static content, i.e. CSS and JS files) from S3 will in all likelihood speed up the page load of your website, and if you need a further performance boost, with just a few clicks of the mouse you can use AWS CloudFront to push your S3 files to geographically dispersed edge locations around the globe, getting those items 'closer' to your intended users.
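A sketch of that upload path, assuming Pillow and boto3 (the bucket name, key layout, and thumbnail sizes are all placeholders):

import io
import boto3
from PIL import Image

s3 = boto3.client("s3")
BUCKET = "my-site-images"  # placeholder bucket name
SIZES = [(64, 64), (128, 128), (256, 256), (512, 512), (1024, 1024)]

def handle_upload(local_path, key):
    # PUT the original straight to S3.
    s3.upload_file(local_path, BUCKET, "originals/" + key)

    # Generate each thumbnail in memory and PUT it too; nothing needs
    # to stay on the host afterwards.
    original = Image.open(local_path)
    for w, h in SIZES:
        thumb = original.copy().convert("RGB")  # JPEG output needs RGB
        thumb.thumbnail((w, h))
        buf = io.BytesIO()
        thumb.save(buf, format="JPEG")
        buf.seek(0)
        s3.put_object(Bucket=BUCKET, Key="thumbs/%dx%d/%s" % (w, h, key),
                      Body=buf, ContentType="image/jpeg")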

Why does Google use base64 encoded data as image src attributes?

All my life, I have saved images on my server as files:
the originals
the thumbnails
the original with watermarks
... all as files in folders.
But today, viewing Google Images, I noticed that the src of the images is a base64-encoded data URI. What benefit does Google get from serving images in this manner? Why would someone do that instead of just serving images conventionally?
Google is sort of obsessed with latency; page-load latency goes up if your browser has to make a separate request to the web server for every image on the page. You can eliminate that round trip by writing the image data right into the page when you generate the page. I actually see a lot of image-heavy sites, especially blogs, using this technique nowadays.
Just because the image is included in the page doesn't necessarily mean it's not also stored as a file on the web server; it just means the web server process that generated the page has already opened and read the image file and written its data into the page. Google is probably storing the images in its proprietary and secret data store, but you don't have to.
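A minimal sketch of the technique in Python ("logo.png" stands in for any image file):

import base64

# Read the image bytes and embed them directly in the tag as a data URI,
# so the browser never makes a separate request for the image.
with open("logo.png", "rb") as f:
    encoded = base64.b64encode(f.read()).decode("ascii")

tag = '<img src="data:image/png;base64,%s" alt="logo" />' % encoded
print(tag)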

Best way to store 1000s of small images (<5k) in either MongoDB or S3?

I'm planning on storing thousands (hopefully even millions some day) of profile images from Facebook and Twitter. Their usual size is less than 5 KB.
What is the best way to do this either in MongoDB or on Amazon S3 and avoid disk fragmentation or similar issues?
Any pointers/tips on the do's and don'ts would be very helpful as well.
Yeah, serve profile images from the associated social site (Facebook, Twitter, etc.) where you can, but if you do have to store uploaded images on S3, then rather than reading each file (from S3) and re-streaming it to your user, you can enable the "Website" feature and link to your images on S3 directly.
So your HTML image tag will look like:
<img src="http://<amazon s3 - website - endpoint>/<image filename>" title="something">
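Enabling that "Website" feature can be done from the S3 console or programmatically; a minimal sketch with boto3 (the bucket name and document keys are placeholders):

import boto3

s3 = boto3.client("s3")

# Turn on static website hosting for the bucket so objects can be
# linked directly via the website endpoint.
s3.put_bucket_website(
    Bucket="my-profile-images",
    WebsiteConfiguration={
        "IndexDocument": {"Suffix": "index.html"},
        "ErrorDocument": {"Key": "error.html"},
    },
)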
Why not just store the usernames instead? The profile image can be accessed via the Facebook Graph API (just replace "username" with any Facebook user's username). You'll also save the work of keeping the profile pictures updated.
<img src="http://graph.facebook.com/username/picture" />
