Getting images from a URL to a directory

Well, straight to the point: I want to supply a URL and get all the images inside it, for example
www.blablabla.com/images
I want to get all the images in this folder. I already know how to get an image from a specific URL, but I don't know how to get all of them without having to go straight to each exact path. Is there a way to get a list of all the items inside a URL path, or something like that?

Well, basically, this can't be done. Not under normal circumstances, anyway. The problem is that you don't know what files are in that directory.
...unless the server has "directory listing" turned on. This is considered a security vulnerability, so the chance of that being the case isn't too high. (The idea is that you are exposing details about your server that you don't have to; while that is no problem on its own, it can make things that are a security problem known to the world.)
This means that if the server is yours, you can turn directory listing on; and if a server happens to have it turned on, you can visit the URL (www.blablabla.com/images) and see a listing of all the files in that directory. This doesn't always look exactly the same, but in general you will get an HTML page with links to all the files in the directory. As such, all you would need to do is retrieve the page and parse the links, ending up with the URLs to the images you want.
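As an illustration, here is a minimal TypeScript sketch (Node 18+ with its global fetch) of that approach. The listing URL is the example from the question, and the regex-based link extraction is deliberately crude; a real HTML parser would be more robust.

    // Fetch a directory-listing page and pull out links that look like images.
    const listingUrl = "http://www.blablabla.com/images/"; // example URL from the question

    async function listImageUrls(url: string): Promise<string[]> {
      const html = await (await fetch(url)).text();
      // Crude extraction: collect href values, keep image-like ones.
      const hrefs = [...html.matchAll(/href="([^"]+)"/gi)].map((m) => m[1]);
      return hrefs
        .filter((h) => /\.(jpe?g|png|gif|webp)$/i.test(h))
        .map((h) => new URL(h, url).href); // resolve relative links against the listing URL
    }

    listImageUrls(listingUrl).then((urls) => console.log(urls));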
If the server is yours, I would recommend at least looking into any other options you might have. One such option could be to make a script that provides all the urls instead of relying on directory listing. This does not have some of the more unfortunate implications that directory listing has (like showing non-images that happen to be in the same directory) and can be more flexible.
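For example, a script along these lines could return a JSON list of image URLs instead of a raw directory listing. This is only a sketch in TypeScript using Node's built-in modules; the directory path, URL prefix, and port are made up for illustration.

    import { createServer } from "http";
    import { readdirSync } from "fs";

    const IMAGE_DIR = "/var/www/images"; // hypothetical directory holding the images

    // Serve a JSON array of image URLs; unlike directory listing, nothing
    // but actual images is ever exposed.
    createServer((_req, res) => {
      const files = readdirSync(IMAGE_DIR)
        .filter((f) => /\.(jpe?g|png|gif|webp)$/i.test(f))
        .map((f) => "/images/" + f);
      res.writeHead(200, { "Content-Type": "application/json" });
      res.end(JSON.stringify(files));
    }).listen(8080);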
Another way to do this might be to use a protocol other than HTTP, like FTP, SFTP or SCP. These protocols do not have the same flexibility as a script, but they are even safer, as they easily allow you to restrict access to both the directory listing and your images to only people with the correct login details (or private keys). (Of course, if such a protocol is available for your use and it's not your own server, you could use it as well.)

Related

Include source code from different directory

I have three different domains all on the same server, and I want to run the code on all three domains from one source on that same server, but I'm not sure of the best way.
Here's what I have:
domain01.com
domain02.com
domain03.com
domain04.com/sourcecode
I want domain01-03 to run the code inside domain04.com/sourcecode so the user can go to their domain and not have to go to domain04.com to see their site. I want to keep all the code inside domain04.com because I don't want to have to put the code inside each domain every time I make a code change.
For whatever reason I can't get my head around the best way to do this, and I want to do it right.
Any advice?
Thanks!
All you need to do is create a mapping on the first three sites to the appropriate directory in the fourth site, e.g. map /domain04 to /full/path/to/domain04/sourcecode, then reference its CFML resources via /domain04 in CFC and include paths. The implication here is that the code does need to be accessible via the file system for all the sites concerned.
Note that if you also want to serve non-CFML files via HTTP (e.g. images, CSS, JS), then you will also need a web server virtual directory along the same lines.
None of this requires a framework, it's standard CF / web server functionality.
Are you using a framework? One like ColdBox could make this trivial if your code is written modularly. (Disclaimer: I am affiliated with ColdBox.)
If not, it really depends on what the code is. CFCs can be mapped from anywhere via ColdFusion mappings. Even .cfm files can be included as long as the file systems are visible. If you basically want a complete copy of a site in another web root without duplication, I would first consider using a shared source control repo and a build process that checks it out in the appropriate places; secondly, a good old symlink will also work.

How can I set up custom ImageResizer urls?

I'm just getting started with ImageResizer and I'm stuck on what seem like totally basic questions:
I have an uploader that I use to put images into a directory that's not directly accessible over HTTP. (If I just put an image at, say, /images/myimage.jpg, then anyone could access it by just asking for it, whereas I want to limit access via thumbnails, watermarks, etc.). So I want to put it at /offlimits/myimage.jpg, but be able to serve it up at /public/images/myimage.jpg.
I don't really want to dump all the images in the same offlimits folder, because putting lots of files in one folder makes Windows unhappy. But I don't want to expose the details of that subdirectory structure either, so where do I put the mapping between the public facing url and the actual image location?
Most generally, I don't necessarily want an image extension at all, so I'd like to say /public/image_id?width=100... and have this map to /offlimits/sub1/sub2/sub3/image_id.jpg.
Can anyone advise about how to set this up?
Three-part questions are generally frowned upon here at SO, but I'll bite anyhow :)
If you're allowing access to images based on authentication, then you need to use ASP.NET's URL Authorization feature. ImageResizer supports URL Authorization rules. If you just don't want the source files available, and want to force them resized or watermarked, read the docs on how to implement arbitrary rules like this.
You can rewrite image paths to your heart's content with the Config.Current.Pipeline.Rewrite event, which works just like the PostRewrite event. Just remember you'll have to keep it all straight in your head later.
Image extensions are good things. Don't fight them. They let the server figure out the right mime-type to send and help errant browsers recover from related bugs. They prevent issues on several platforms and make the Save As dialog work. They also significantly improve server efficiency, since the handling logic doesn't have to wait as long. This is particularly relevant because of the design of the IIS/ASP.NET module system.

Does your average web user understand the concept of the clipboard?

I'm designing a web site where I would expect the intended audience to have limited computer skills.
An important part of my site's functionality will require the end user to copy a URL that my site generates and use it in emails, social network postings, or on their own web site.
I could write the URL to the clipboard when a button is pushed (like tinyurl.com does). However I'm wondering whether the average user even understands what that means and how to use it.
Any guidance will be appreciated.
There are several possible paths of thought here, and you can support several of these scenarios at once:
Copy the URL to the clipboard automatically as soon as it is presented.
Copy the URL when the user pushes a button.
Select the URL automatically so the user can copy it directly.
Of course, if the URL is copied automatically, the other two actions don't need to touch the clipboard at all, unless the user has modified the clipboard after the URL was presented.
When the URL is copied, at least in the first two cases, you could show the user a message saying that it has been copied, which makes the feature useful for those who understand it.
For those who are focused on copying the URL themselves, the third option helps them on their way.
Whatever path the user tries should go as smoothly as possible, without forcing them to understand the benefits offered. In the long run that understanding might help them, but if the site is only used a few times, the work of understanding what is going on outweighs the benefit.
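As a sketch of the second and third options, here is some browser-side TypeScript using the modern Clipboard API; the element IDs are made up for illustration.

    // Assumes an <input id="share-url"> holding the generated URL and a
    // <button id="copy-btn">; both IDs are hypothetical.
    const input = document.getElementById("share-url") as HTMLInputElement;
    const button = document.getElementById("copy-btn") as HTMLButtonElement;

    // Option 3: pre-select the text so users who copy manually are helped along.
    input.focus();
    input.select();

    // Option 2: copy on button press and confirm it for users who understand it.
    button.addEventListener("click", async () => {
      await navigator.clipboard.writeText(input.value);
      button.textContent = "Copied!";
    });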

Prevent direct-linking to .zip files

I'd like to prevent direct-linking to .zip files I offer for download on my website.
I've been reading posts for hours now, but I'm not sure which method is best for achieving that. PHP doesn't seem safe on its own, the .htaccess referrer can be empty, etc.
Which method do you guys use or would suggest?
Cheers
See: http://www.alistapart.com/articles/hotlinking/
and: http://www.webmasterworld.com/forum92/2787.htm
Referrer checking is one option, but as you noted they can be empty or spoofed.
Another possibility is to set a cookie when someone visits normal pages on your site, and check for that cookie when the person tries to download the zip file. This could be gotten around (e.g. by the hot-linker embedding an appropriate cookie-setter page as a 1x1 image alongside the hot link), but it's less likely they'll figure it out. It'll also exclude people who block cookies, of course.
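A minimal sketch of the cookie idea in TypeScript, using Node's built-in http module; the cookie name, file path, and port are all made up.

    import { createServer } from "http";
    import { createReadStream } from "fs";

    const COOKIE = "visited=1"; // hypothetical marker cookie set on normal pages

    createServer((req, res) => {
      if (req.url === "/file.zip") {
        // Refuse the download unless the marker cookie came along.
        if (!(req.headers.cookie ?? "").includes(COOKIE)) {
          res.writeHead(403);
          res.end("Please visit the download page first.");
          return;
        }
        res.writeHead(200, { "Content-Type": "application/zip" });
        createReadStream("files/file.zip").pipe(res);
        return;
      }
      // Every normal page sets the cookie for later.
      res.writeHead(200, { "Set-Cookie": COOKIE, "Content-Type": "text/html" });
      res.end('<a href="/file.zip">Download</a>');
    }).listen(8080);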
Another possibility is to generate limited-time-access URLs on the download page, something along the lines of http://example.com/download.php?file=file.zip&code=some-random-string-here. The link would only be usable for a small number of downloads and/or a short period of time, after which it would no longer function.
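One way to implement such a code without storing anything server-side is to sign the file name and an expiry timestamp with an HMAC. Here is a TypeScript sketch; the secret and the validity window are arbitrary, and the /download.php URL shape just mirrors the example above.

    import { createHmac } from "crypto";

    const SECRET = "change-me"; // hypothetical server-side secret
    const WINDOW_MS = 15 * 60 * 1000; // arbitrary 15-minute validity window

    // Build a download URL whose code covers the file name and expiry time.
    function makeDownloadUrl(file: string): string {
      const expires = Date.now() + WINDOW_MS;
      const code = createHmac("sha256", SECRET).update(`${file}:${expires}`).digest("hex");
      return `/download.php?file=${encodeURIComponent(file)}&expires=${expires}&code=${code}`;
    }

    // The download script recomputes the code and rejects stale or forged links.
    function isValid(file: string, expires: number, code: string): boolean {
      if (Date.now() > expires) return false;
      const expected = createHmac("sha256", SECRET).update(`${file}:${expires}`).digest("hex");
      return code === expected;
    }

    console.log(makeDownloadUrl("file.zip"));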

How to avoid occasional corrupted downloads

My website hosts an MSI file that users need to download. There is nothing special about the file. It lives in a directory on the webserver with a regular HREF pointing to it that users click on. Occasionally a user will complain that they can't open the MSI file because Windows Installer claims the file is corrupt. Redownloading the file doesn't help. I end up emailing the file as an attachment, which usually works.
My guess is that the file is either corrupted in the user's browser cache or perhaps an intermediary proxy's cache which the user goes through.
Why does this happen? Is there a technique or best practice that will minimize the chances of corruption, or perhaps make sure users get a fresh copy of the file if it does get corrupted during download?
Well, if the cause is really just the cache, then I think you could just rename the file before having them download it again. This would work for any proxies too.
Edit: Also, I believe most browsers won't cache pages unless the GET and POST parameters remain the same. The same probably applies to any URL in general. Try adding a unique GET (or POST) parameter to the end of the URL of each download. You could use the current time, or a random number, etc. Rather than a hyperlink, you could have a button that, when clicked, submits a form with a unique parameter to the download URL.
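A small client-side TypeScript sketch of the unique-parameter idea; the link ID and file name are made up.

    // Assumes a download link like <a id="dl" href="/setup.msi">. Appending a
    // throwaway query parameter makes each request URL unique, so caches and
    // proxies can't serve a stale copy.
    const link = document.getElementById("dl") as HTMLAnchorElement;
    link.addEventListener("click", () => {
      const fresh = new URL(link.href);
      fresh.searchParams.set("nocache", Date.now().toString());
      link.href = fresh.toString();
    });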
My advice would be:
Recommend that users avoid IE (especially the older versions), because of truncated downloads, cache pollution, and so on.
Advise users to clear the cache before re-downloading the file.
Host the file on an FTP server instead of serving it over HTTP.
Provide an MD5 checksum so users can verify the download (see the sketch below).
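For the last point, here is a short Node TypeScript sketch that produces the MD5 checksum to publish next to the download link; the file name is just an example. Users can then compare it against the output of a tool like md5sum.

    import { createHash } from "crypto";
    import { createReadStream } from "fs";

    // Stream the file through an MD5 hash and print the digest to publish
    // alongside the download. "setup.msi" is an example file name.
    const hash = createHash("md5");
    createReadStream("setup.msi")
      .on("data", (chunk) => hash.update(chunk))
      .on("end", () => console.log(`MD5 (setup.msi) = ${hash.digest("hex")}`));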
