I'm just getting started with ImageResizer and I'm stuck on what seem like totally basic questions:
I have an uploader that I use to put images into a directory that's not directly accessible over HTTP. (If I just put an image at, say, /images/myimage.jpg, then anyone could access it by just asking for it, whereas I want to limit access via thumbnails, watermarks, etc.) So I want to put it at /offlimits/myimage.jpg, but be able to serve it up at /public/images/myimage.jpg.
I don't really want to dump all the images in the same offlimits folder, because putting lots of files in one folder makes Windows unhappy. But I don't want to expose the details of that subdirectory structure either, so where do I put the mapping between the public-facing URL and the actual image location?
Most generally, I don't necessarily want an image extension at all, so I'd like to say /public/image_id?width=100... and have this map to /offlimits/sub1/sub2/sub3/image_id.jpg.
Can anyone advise about how to set this up?
Three-part questions are generally frowned upon here at SO, but I'll bite anyhow :)
If you're allowing access to images based on authentication, then you need to use ASP.NET's URL Authorization feature. ImageResizer supports URL Authorization rules. If you just don't want the source files available, and want to force them resized or watermarked, read the docs on how to implement arbitrary rules like this.
You can rewrite image paths to your heart's content with Config.Current.Rewrite, which works just like the PostRewrite event mentioned earlier. Just remember you'll have to keep it all straight in your head later.
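To make the idea concrete, here is a minimal sketch in TypeScript/Express rather than ImageResizer itself; the imageLocations table, route, and paths are hypothetical stand-ins for whatever lookup you keep between public ids and private file locations:

```typescript
import express from "express";
import path from "path";

const app = express();

// Hypothetical lookup table: in practice this mapping would live in a
// database written by the uploader, so the public URL never reveals
// the on-disk directory structure.
const imageLocations: Record<string, string> = {
  image_id: "/offlimits/sub1/sub2/sub3/image_id.jpg",
};

app.get("/public/:id", (req, res) => {
  const realPath = imageLocations[req.params.id];
  if (!realPath) {
    res.status(404).send("Not found");
    return;
  }
  // Resizing/watermarking (e.g. honoring ?width=100) is elided here;
  // this just streams the private file back under the public URL.
  res.sendFile(path.resolve(realPath));
});

app.listen(3000);
```

The point is that the mapping lives server-side, so the /offlimits layout never appears in any public URL.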
Image extensions are good things. Don't fight them. They let the server figure out the right mime-type to send and help errant browsers recover from related bugs. They prevent issues on several platforms and make the Save As dialog work. They significantly improve server efficiency as well, since handling logic doesn't have to wait as long. This is particularly relevant because of the design of the IIS/ASP.NET modules system.
I have three different domains all on the same server, and I want to run the code on all three domains from one source on that server, but I'm not sure of the best way.
Here's what I have:
domain01.com
domain02.com
domain03.com
domain04.com/sourcecode
I want domain01-03 to run the code inside domain04.com/sourcecode so the user can go to their domain and not have to go to domain04.com to see their site. I want to keep all the code inside domain04.com because I don't want to have to put the code inside each domain every time I make a code change.
For whatever reason I can't get my head around the best way to do this -- and want to do it right.
Any advice?
Thanks!
All you need to do is create a mapping on the first three sites to the appropriate directory in the fourth site, e.g. map /domain04 to /full/path/to/domain04/sourcecode, then reference its CFML resources via /domain04 in CFC and include paths. The inference here is that the code does need to be accessible via the file system for all sites concerned.
Note that if you also want to serve non-CFML files via HTTP (e.g. images, css, js), then you will also need a web server virtual directory along the same lines.
None of this requires a framework; it's standard CF / web server functionality.
Are you using a framework? One like ColdBox could make this trivial if your code is written modularly. (Disclaimer: I am affiliated with ColdBox.)
If not, it really depends on what the code is. CFCs can be mapped anywhere via ColdFusion mappings. Even .cfm files can be included as long as the file systems are visible. If you want a basically complete copy of a site in another web root without duplication, I would first consider using a shared source control repo and a build process that checks it out in the appropriate places; secondly, a good old symlink will also work.
Well, straight to the point: I want to put in a URL and get all the images inside this URL, for example
www.blablabla.com/images
In this images folder I want to get all the images... I already know how to get an image from a specific URL, but I don't know how to get all of them without having to go straight to the exact path. Is there a way to get a list of all the items inside a URL path or something like that?
Well, basically, this can't be done. Well, not under normal circumstances anyway. The problem is that you don't know what files are in that directory.
...unless the server has "directory listing" on. This is considered a security vulnerability, so the chance this is the case isn't too high. (The idea is that you are exposing details about your server that you don't have to, and while it is no problem on its own, it might make things that can be a security problem known to the world.)
This means that if the server is yours, you can turn directory listing on, or that when the server happens to have it turned on, you can visit the URL (www.blablabla.com/images) and see a listing of all the files in that directory. This doesn't always look exactly the same, but in general the common thing is that you will get an HTML page with links to all the files in the directory. As such, all you would need to do is retrieve the page and parse the links, ending up with the URLs to the images you want.
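As a rough illustration of that retrieve-and-parse approach (assuming directory listing is enabled; the URL is the example one from the question), a TypeScript sketch might look like this:

```typescript
// Fetch a directory-listing page and pull out the image links.
// Works only if the server has directory listing enabled.
const listingUrl = "http://www.blablabla.com/images/";

async function listImages(url: string): Promise<string[]> {
  const html = await (await fetch(url)).text();
  // Listings are ordinary HTML full of <a href="..."> links, so a
  // simple pattern match on image extensions is usually enough.
  const matches = html.match(/href="([^"]+\.(?:jpe?g|png|gif))"/gi) ?? [];
  // Strip the href="..." wrapper and resolve relative links.
  return matches.map((m) => new URL(m.slice(6, -1), url).toString());
}

listImages(listingUrl).then((urls) => console.log(urls));
```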
If the server is yours, I would recommend at least looking into any other options you might have. One such option could be to make a script that provides all the URLs instead of relying on directory listing. This does not have some of the more unfortunate implications that directory listing has (like showing non-images that happen to be in the same directory) and can be more flexible.
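A minimal sketch of such a script, as a hypothetical TypeScript/Express endpoint that returns only the image URLs as JSON (the directory path and route are assumptions):

```typescript
import express from "express";
import { readdirSync } from "fs";

const app = express();

// Hypothetical endpoint that returns the image URLs as JSON, so clients
// never depend on directory listing being enabled.
app.get("/images/list", (_req, res) => {
  const files = readdirSync("/var/www/images") // assumed on-disk location
    .filter((f) => /\.(jpe?g|png|gif)$/i.test(f)); // expose only images
  res.json(files.map((f) => `/images/${f}`));
});

app.listen(3000);
```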
Another way to do this might be to use a protocol different from HTTP like FTP, SFTP or SCP. These protocols do not have the same flexibility as a script, but they do have even more safety as they easily allow you to restrict access to both the directory listing and your images to only people with correct login details (or private keys). (Of course, if such a protocol is available for your use and it's not your own server, you could use them as well.)
I'm designing a web site where I would expect the intended audience to have limited computer skills.
An important part of my site's functionality will require the end user to copy a URL that my site generates and use it in emails, social network postings, or on their own web site.
I could write the URL to the clipboard when a button is pushed (like tinyurl.com does). However I'm wondering whether the average user even understands what that means and how to use it.
Any guidance will be appreciated.
There are several possible approaches, and you can support more than one of these scenarios at once.
Copy the URL to the clipboard automatically as soon as it is generated.
Copy the URL to the clipboard when the user pushes a button.
Select the URL automatically so the user can copy it themselves.
Of course, if the URL is copied automatically, the other two actions do not need to touch the clipboard unless the user has modified it after the URL was presented.
When the URL is copied, at least in the first two cases, you could show the user a message that it has been copied, which makes it useful for those who understand it.
For those who might be focused on copying the URL themselves, the third option helps them on their way.
Whatever path the user tries should go as smoothly as possible without forcing them to understand the benefits offered. In the long run it might help them, but if the site is only used a few times, the work of understanding what is going on outweighs the benefit.
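For what it's worth, a minimal browser-side sketch of the second and third options might look like this in TypeScript; the element ids are invented for the example:

```typescript
// Copy on button press (option 2) and select-on-focus (option 3).
// The ids "share-url" and "copy-btn" are made up for this sketch.
const urlField = document.getElementById("share-url") as HTMLInputElement;
const copyButton = document.getElementById("copy-btn") as HTMLButtonElement;

copyButton.addEventListener("click", async () => {
  await navigator.clipboard.writeText(urlField.value);
  // Feedback for the users who do understand the clipboard.
  copyButton.textContent = "Copied!";
});

// Users who want to copy it themselves get the whole URL pre-selected.
urlField.addEventListener("focus", () => urlField.select());
```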
Not long ago I came across this website: http://www.danasoft.com/
This website provides dynamically updating signatures, which are pretty cool in my opinion.
There is just one thing that I don't get and would really like to know how to do.
Here's a direct link to an image on the website: http://www.danasoft.com/vipersig.jpg Try refreshing. Notice it changes? How do I achieve that? How do I have a direct link to a file like www.mypage.com/thing.jpeg output different images each time?
Basically, the URL is not actually retrieving a static file each time; rather, the server is intercepting that URL and serving a (possibly random) image from a larger set of images. Depending on whether the server is running Apache, IIS, etc., the implementation could vary... This could also probably be achieved with the MVC routing engine by defining a custom route handler for URLs ending in '.jpg', but I'm not actually sure.
EDIT:
See this discussion for more detail on the MVC implementation.
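As a rough sketch of the intercept-and-serve idea (not how danasoft actually does it), a Node/Express handler in TypeScript could serve a random file behind a fixed URL; the image pool and paths here are made up:

```typescript
import express from "express";
import path from "path";

const app = express();

// Made-up pool of signature images; the URL below never changes,
// but the file served behind it does.
const pool = ["sig1.jpg", "sig2.jpg", "sig3.jpg"];

app.get("/thing.jpeg", (_req, res) => {
  const pick = pool[Math.floor(Math.random() * pool.length)];
  // Discourage caching so each refresh actually hits the server again.
  res.set("Cache-Control", "no-store");
  res.sendFile(path.resolve("images", pick));
});

app.listen(3000);
```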
We have members-only paid content that is frequently copied and republished without our permission.
We are trying to ‘watermark’ our content by including each customer’s user id in a fake CSS class, for example <p class='userid_1234'> (except not so obvious, of course :), which would help us track the source of the copying; we then place that class somewhere in the article body.
The problem is, by including user-specific information into an article, it makes it so that the article content is ineligible for caching because it is now unique to each user.
This bumps the page load time from ~0.8ms to ~2.5sec for each article page view.
Does anyone know of any watermarking strategies that can still be used with caching?
Alternatively, what can be done to speed up database access? (Ha, ha, that’s just a tiny topic, I’m sure...)
We're using the CMS Expression Engine, but I'd like to hear about any strategies. They don't have to be EE-specific.
If you're talking about images then you could use PHP to add a watermark to the images.
How can I add an image onto an image in PHP like a watermark
It's a tool to help track down the lazy copiers who just copy the source code as-is. This is not preventative, nor is it a deterrent. – Ian
Going by your above comment, you are happy with users copying your content, just not without the formatting etc. So what you could do is provide users with an embed type of source code for that particular content, just like YouTube does with videos. Into that embed source code you could add your own links back to your site, utilize your own CSS, etc.
That way you can still allow the members to use the content but it will always come out the way you intended it with links back to your site.
Thanks
You could always cache a version that uses a special string, like #!username!#, and then later fill it in with PHP based on which user is viewing it.
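The answer mentions PHP, but the idea is language-agnostic. A minimal sketch in TypeScript, where the cached markup and the token are assumptions for illustration:

```typescript
// One shared cached copy of the article carries the token; only this
// cheap string substitution runs per request, while the expensive
// article render stays cached and shared across all users.
const cachedArticle = `<p class="userid_#!username!#">...article body...</p>`;

function renderForUser(userId: string): string {
  return cachedArticle.replaceAll("#!username!#", userId);
}

console.log(renderForUser("1234")); // <p class="userid_1234">...
```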
Another approach is to switch from caching on the server to letting the browser cache the page locally for a little while. That way it is only cached per user, and it reduces the calls to your database. Because an article is pretty static, you could let the local machine cache it and pull in comments via JavaScript.
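A rough sketch of that split in TypeScript/Express, with arbitrary routes and an arbitrary one-hour cache lifetime:

```typescript
import express from "express";

const app = express();

// Let the browser cache the article itself per user for a short while,
// and fetch comments separately so they stay fresh.
app.get("/article/:id", (_req, res) => {
  res.set("Cache-Control", "private, max-age=3600"); // per-user browser cache
  res.send("<html>...article markup; comments loaded via fetch()...</html>");
});

app.get("/article/:id/comments", (_req, res) => {
  res.set("Cache-Control", "no-store"); // comments always come fresh
  res.json([{ author: "alice", text: "..." }]);
});

app.listen(3000);
```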
This last one is probably not what you are really looking for, but I'm going to come out and say it anyway. You could stop treating your users like thieves and instead treat the thieves as thieves. Go to the person hosting the servers your content is on and send them an email telling them that copyrighted premium content is being hosted on their servers without your permission. You can even automate that process.
How do you find out which sites are posting your content? Put a link to your site in the body content, and do a Google Search/Blog Search for articles linking to that site. To automate it, use Google Blog Search, because it offers RSS feeds. Anything that links back to your site could go into a database with a link to the page; someone could review it, and if it is the entire article, do a Whois lookup and send them an email.
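A bare-bones sketch of that automation in TypeScript; the feed URL is a placeholder for whatever the search service provides, and a real version would use a proper XML parser and a database:

```typescript
// Poll an RSS feed of search results for links back to your site.
// The feed URL below is a placeholder, not a real service endpoint.
const feedUrl = "https://example.com/blogsearch?q=link:mysite.com&output=rss";

async function findCopies(): Promise<string[]> {
  const xml = await (await fetch(feedUrl)).text();
  // Naive extraction of <link> entries for manual review later.
  const links = xml.match(/<link>([^<]+)<\/link>/g) ?? [];
  return links.map((l) => l.replace(/<\/?link>/g, ""));
}

findCopies().then((hits) => console.log(hits));
```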
What makes you think adding CSS to something is going to stop people from copying it without that CSS? It's more likely that they are just copying the source of the content you are showing them and ignoring all the styling around it. For example, I use Tamper Data to look at all HTTP requests made by Firefox; if I can see it on the page, I can see it in the logs. Even with all the "protection" some sites try to put in place, it generally will never work. I can grab what I want without using any screen capture/recording.
If you were serving FLVs, for example, I would easily be able to grab the source of those even if you overlaid them with some CSS. I think the best approach would be to contact the sites publishing your premium content and ask them to remove it. It's either that or watermark the actual content on the fly while sending it to the browser.