S3 presigned URLs are being created for items that do not exist. Is this normal behavior? I would rather know if the item is not going to exist when creating the link, than send users to an error page. Obviously, I can check if the item exists before I create the link, but I'm wondering if I'm doing something wrong.
Yes, this is normal behavior. The pre-signed URL is simply a local calculation and signing of a URL. It has no interaction with the S3 service at all.
If you want to ensure that an object exists before you generate a pre-signed URL for it, then you should issue a HEAD request for that object first.
Note: you can use pre-signed URLs to upload new objects, which obviously don't yet exist at the time you generate the URL. You might also want to use pre-signed URLs to download objects that don't yet exist, but will at some later date (though I admit this is probably not that common a use case).
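For example, here is a minimal sketch in Python with boto3 that HEADs the object before signing; the bucket and key names are placeholders:

import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

def presign_if_exists(bucket, key, expires=3600):
    try:
        # HEAD the object first; this is the only call that actually talks to S3
        s3.head_object(Bucket=bucket, Key=key)
    except ClientError as e:
        if e.response["Error"]["Code"] in ("404", "NoSuchKey"):
            return None  # object is missing, so don't hand out a dead link
        raise
    # Signing is a purely local computation; no request is sent to S3 here
    return s3.generate_presigned_url(
        "get_object",
        Params={"Bucket": bucket, "Key": key},
        ExpiresIn=expires,
    )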
Well, straight to the point: I want to supply a URL and get all the images under that URL, for example
www.blablabla.com/images
in this images folder I want to get all the images... I already know how to get an image from a specific URL, but I don't know how to get all of them without having to specify each exact path. Is there a way to get a list of all the items inside a URL path, or something like that?
Well, basically, this can't be done. Not under normal circumstances, anyway. The problem is that you don't know what files are in that directory.
...unless the server has "directory listing" on. This is considered a security vulnerability, so the chance this is the case isn't too high. (The idea is that you are exposing details about your server that you don't have to, and while it is no problem on its own, it might make things that can be a security problem known to the world.)
This means that if the server is yours you can turn directory listing on, or that if the server happens to have it turned on already, you can visit the url (www.blablabla.com/images) and see a listing of all the files in that directory. The exact format varies, but in general you will get an HTML page with links to all the files in the directory. All you would need to do is retrieve that page and parse the links, ending up with the URLs of the images you want.
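If directory listing is on, a rough sketch of that scrape in Python (standard library only; the URL and image extensions are just examples):

from html.parser import HTMLParser
from urllib.parse import urljoin
from urllib.request import urlopen

class LinkCollector(HTMLParser):
    # Collects the href of every <a> tag in the listing page
    def __init__(self):
        super().__init__()
        self.links = []
    def handle_starttag(self, tag, attrs):
        if tag == "a":
            href = dict(attrs).get("href")
            if href:
                self.links.append(href)

base = "http://www.blablabla.com/images/"
page = urlopen(base).read().decode("utf-8", errors="replace")
parser = LinkCollector()
parser.feed(page)

# Resolve relative links and keep only the ones that look like images
image_urls = [urljoin(base, href) for href in parser.links
              if href.lower().endswith((".jpg", ".jpeg", ".png", ".gif"))]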
If the server is yours, I would recommend at least looking into any other options you might have. One such option could be to make a script that provides all the urls instead of relying on directory listing. This does not have some of the more unfortunate implications that directory listing has (like showing non-images that happen to be in the same directory) and can be more flexible.
Another way to do this might be to use a protocol other than HTTP, like FTP, SFTP or SCP. These protocols don't have the same flexibility as a script, but they are safer in that they easily let you restrict access to both the directory listing and your images to people with the correct login details (or private keys). (Of course, if such a protocol is available to you and it's not your own server, you could use it as well.)
I have a working Cocoa app that creates a database file and stores it locally. What I would like to do is store that file on a remote server so that different users of my app at different locations would be sharing the same file. My thought was to store the file on a website or FTP server, such as www.mydomain.com/mydatafile.
Forgetting about issues like two users attempting to access the file simultaneously for the moment, can someone point me to an example of how to properly construct the URL to be used?
I'm thinking that it should be a fairly simple process with two parts, the first of which is a cocoa NSURL question, and the second which is really more of a w3 issue:
Create the URL to the file itself, and
Append the username and password required to log in to the FTP site.
Any nudges in the right direction would be appreciated!
* edit *
I should mention that the file I would like to be shared by multiple users, is basically several custom objects stored as a file with NSKeyedArchiver...
I suggest you integrate your app with a cloud-based document storage, sharing, and editing service like Google Docs/Drive.
That is, unless you are going to provide very specific file formats native to your app or are doing something out of the ordinary.
Using something like this would save you time, and users won't have to create yet another login ID.
I have a one page javascript(Backbone) frontend running on S3 and I'd like to have a couple of deeplinks to be redirected to the same index file. You'd normally do this with mod_rewrite in Apache but there is no way to do this in S3.
I have tried setting the default error document to be the same as the index document, and
this works on the surface, but if you check the actual response status header you'll see the page comes back as a 404. This is obviously not good.
There is another solution; it's ugly, but better than the error document hack:
It turns out that you can create a copy of index.html and name it the same as the subdirectory (minus the trailing slash). So, for example, if I clone index.html and name it 'about', and make sure the Content-Type is set to text/html (in the metadata tab), all requests to /about will return the new 'about' object, which is a copy of index.html.
Obviously this solution is sub-optimal and only works with predefined deeplink targets, but the hassle could be lessened if the step to clone index.html was part of a build process for the frontend. Using Backbone-Boilerplate I could write a grunt task to do just that.
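If a grunt task isn't a hard requirement, the same clone step could also be a small boto3 script run at deploy time. Here is a rough sketch; the bucket name and the deeplink list are placeholders:

import boto3

s3 = boto3.client("s3")
BUCKET = "my-frontend-bucket"        # placeholder bucket name
DEEPLINKS = ["about", "contact"]     # predefined deeplink targets

for key in DEEPLINKS:
    # Copy index.html to each deeplink key, forcing the HTML content type
    s3.copy_object(
        Bucket=BUCKET,
        Key=key,
        CopySource={"Bucket": BUCKET, "Key": "index.html"},
        ContentType="text/html",
        MetadataDirective="REPLACE",  # needed so the new ContentType actually sticks
    )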
Other than these two hacky workarounds I don't see a way of doing this other than resorting to hashbangs...
Any suggestions will be greatly appreciated.
UPDATE:
S3 now (and has for a while, actually) supports Index Documents, which solves this problem.
Also, if you use Route 53 for your DNS management you can set up an alias record pointing to your S3 bucket, so you don't need a subdomain + CNAME anymore :)
Unfortunately, as far as I know (and I use S3 websites quite a bit), you're right on the money. The 404 hack is a really bad idea, as you said, and so you basically have these options:
Use a regular backend of some kind and not S3
The Content-Type work-around
Hashbangs
Sorry to be the bearer of bad news :)
For me, the fact that you can't really direct the root of the domain to S3 websites was the deal breaker for some of my stuff. mod_rewrite-type scenarios sound like another good example where it just doesn't work.
Did you try redirecting to a hash? I am not sure if this S3 feature was available when you asked this question, but I was able to fix the problem using these redirection rules in the Static Website Hosting section of the bucket's properties.
<RoutingRules>
  <RoutingRule>
    <Condition>
      <KeyPrefixEquals>topic/</KeyPrefixEquals>
    </Condition>
    <Redirect>
      <ReplaceKeyPrefixWith>#topic/</ReplaceKeyPrefixWith>
    </Redirect>
  </RoutingRule>
</RoutingRules>
The rest is handled in the Backbone.js application.
I'm trying to wrap my head around what approach I should use to force CDN refreshes of user profile photos on a website where CloudFront is the CDN serving the profile photos, and S3 is the underlying file store.
I need to ensure that user profile photos are up to date as soon as a user updates them. I see three options for updating profile photos and ensuring that website users get the latest image as soon as the photos are revised. Of these approaches, is one better than the others in terms of ensuring fresh content and maximum long-term cost effectiveness? Are there better approaches to ensuring fresh content and maximum long-term cost effectiveness?
Issue one S3 put object request to save the file with its original file name, and issue one Amazon CloudFront invalidation request. Amazon CloudFront allows up to 1,000 free invalidation requests per month, which seems a bit on the low side.
Issue one S3 delete object request to delete the original photos, then one S3 put object request to save the new photo with a unique, new photo file name. This would be two S3 requests per file update, and would not require a CloudFront CDN invalidation request. CloudFront would then serve the latest files as soon as they were updated, provided image URLs were automatically set to the new file names.
Issue one S3 put object request to save the file with its original file name, and then client-side append a version code to the CDN URLs (i.e. /img/profilepic.jpg?x=timestamp) or something along that line. I'm not sure how effective this strategy is in terms of invalidating cached CloudFront objects.
Thanks
CloudFront invalidation can take a while to kick in and is recommended only as a last resort for content that must be removed (like a copyright infringement).
The best approach is to use versioned URLs. For profile images I would use a unique ID (such as a GUID). Whenever a user uploads a new photo, replace that URL (and delete the old photo if you wish).
When you update your DB with the new ID of the user's profile photo, CloudFront will pull the new image and the change will be immediate.
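A sketch of that flow in Python with boto3; the bucket name and key layout are placeholders, and the DB update is a hypothetical helper shown as a comment:

import uuid
import boto3

s3 = boto3.client("s3")
BUCKET = "my-profile-photos"  # placeholder bucket name

def replace_profile_photo(image_bytes, old_key=None):
    # Upload under a fresh GUID-based key so the URL changes with every update
    new_key = f"profiles/{uuid.uuid4()}.jpg"
    s3.put_object(Bucket=BUCKET, Key=new_key, Body=image_bytes,
                  ContentType="image/jpeg")
    # update_user_photo_key(user_id, new_key)  # hypothetical DB update
    if old_key:
        s3.delete_object(Bucket=BUCKET, Key=old_key)
    return new_key  # the site now links to the CloudFront URL for new_key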
My MVC project uses the default location (/Content/...)
So where this code:
<div id="header"style="background-image: url('/Content/images/header_.jpg')">
resolves as www.myDomain.com/content/images/header_.jpg
I'm moving my image files to S3, so now they resolve from 'http://images.myDomain.com'. Do I have to convert all the links in the project to that absolute path?
Is there perhaps an IIS7x property to help here?
EDIT: The question seems to boil down to the specifics of working with IIS's URL Rewrite Module. The samples I've seen so far show how to manipulate the lower end (path) and query string of a URI. I need to remap the domain end of the URI:
http://www.myDomain.com/content/images/header_.jpg
needs to become:
http://images.myDomain.com/header_.jpg
thx
I'm not sure I understand you correctly. Do you mean
How do I transparently rewrite image urls like http://www.myDomain.com/Content/myImage.png as http://images.myDomain.com/Content/myImage.png at render time?
Or
How do I serve images like http://images.myDomain.com/Content/myImage.png transparently from S3?
There's a DNS trick to answer the second one.
Create the 'images.myDomain.com' bucket, and put your content in it under the '/Content/' path. Since S3 exposes buckets as domains in their own right, you can now get your content with
http://images.myDomain.com.s3.amazonaws.com/Content/myImage.png
You can then create a CNAME record with your DNS provider, mapping 'images.myDomain.com' to 'images.myDomain.com.s3.amazonaws.com'.
This lets you link to your images as
http://images.myDomain.com/Content/myImage.png
...and yet have them served from S3. (You might also consider a full CDN such as CloudFront.)
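If your DNS happens to be managed in Route 53, here is a rough sketch of creating that CNAME with boto3; the hosted zone ID is a placeholder, and with any other DNS provider you would add the equivalent CNAME in its control panel:

import boto3

route53 = boto3.client("route53")

# UPSERT a CNAME pointing images.myDomain.com at the S3 bucket endpoint
route53.change_resource_record_sets(
    HostedZoneId="ZXXXXXXXXXXXXX",  # placeholder hosted zone ID
    ChangeBatch={
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "images.myDomain.com",
                "Type": "CNAME",
                "TTL": 300,
                "ResourceRecords": [
                    {"Value": "images.myDomain.com.s3.amazonaws.com"}
                ],
            },
        }]
    },
)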