In my Ruby on Rails app, I use the imdb gem (https://rubygems.org/gems/imdb) to search for a movie by title, grab the poster URL, and save it on the movie model in my database. Then, in my view, I put that URL in an image source tag to display the image to the user.
I don't have any problems when I'm running the application locally, but when I deploy it to Heroku, a few images sometimes render successfully, but most of them don't display properly. I've tried multiple browsers, and it turns out that when I try to load an image, I get a "Referral Denied" message saying:
You don't have permission to access "[poster url here]" on this server. Reference #[some ref. number here]
How would I go about fixing this? I'm guessing the IMDB server is denying access either because I'm making too many requests from my application, or because my application doesn't have the necessary credentials to get the data, or some combination of both. Is there a way to bypass this at all?
IMDB blocks direct linking (hotlinking) of images from their site on other sites; I think this previous question covers the topic.
The easiest way to get around this is to download the image and host it yourself rather than linking to IMDB's copy. Alternatively, you could investigate alternative movie DBs to see if they offer what you want; the answers to this question on IMDB APIs list a few. The Movie DB API looks like a good bet.
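For the Rails case above, a minimal sketch of the download-and-host approach, assuming a Movie model with a poster_url column and a poster_path column to fill in (the names and storage path are illustrative, not from the question):

require "open-uri"

# Hypothetical sketch: fetch the poster once when the record is created,
# store a local copy under public/, and serve that instead of hotlinking IMDB.
class Movie < ActiveRecord::Base
  after_create :cache_poster_locally

  private

  def cache_poster_locally
    return if poster_url.blank?

    filename   = "poster_#{id}#{File.extname(URI.parse(poster_url).path)}"
    local_path = Rails.root.join("public", "posters", filename)
    FileUtils.mkdir_p(File.dirname(local_path))

    # URI.open comes from open-uri (on older Rubies this was plain open())
    File.open(local_path, "wb") { |f| f.write(URI.open(poster_url).read) }
    update_column(:poster_path, "/posters/#{filename}")
  end
end

Bear in mind the copyright point below: only do this if the image source permits it.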
Related
I'm parsing data from another website, and I'm wondering whether it's better to download the images and serve them myself, or just link to the images on the site I parsed. Is linking to a remote image inherently slower than serving the image from my own server?
I couldn't find an answer to this simple question. If the question is too open-ended and doesn't belong here, please comment below so I can delete it.
Some rules of thumb:
Don't display content on your page which you 'source' from another site without the other site's permission. ('Share this' embeds provided by YouTube are okay; directly linking to the .flv file of someone's video to display it on your own site is not.)
Don't copy content from other domains onto your domain without their permission first (doing so would be a copyright violation).
So, to answer your question: you should copy the content onto your own domain/host, but only if they have given permission for that kind of use.
Edit: I am interpreting your question as "I am taking content from another website [and putting it on my own], and I am wondering if I should link directly to their content (<img> tags pointing to the other domain) or if I should download/copy the content to my website and have my server handle everything?"
The "technical" answer is "it depends on how good your host is compared to the other host when serving content to the average visitor". Compare a page run by Google vs. the same thing run on a home server behind a 56k modem. It matters if you have broadband, but if you're on a 33.3k modem it doesn't.
I would like to build a Chrome extension (CE) that pulls data from a ruby database for a specific user. So, in a basic example, if a user submits their favorite color as 'red' and sport as 'tennis' into the database from the core website, then when they click the CE, 'red' and 'tennis' will show up no matter where they are on the internet.
Any guidance on how to build something like this? It seems quite simple, but I'm not sure how the CE files fit in with the Rails folder structure.
Also, is it possible to write to a ruby database from a popped-out CE? I.e., submitting 'red' and 'tennis' from the CE to the ruby database, to go along with the previous example. Any guidance?
Cheers
This is a very general question so it sounds like you will need to learn a lot. Which can be a good thing :)
Here are the general steps you need:
Look into building an API for your Rails application. This will allow you to get data from your database. For example, you can add an endpoint like http://yoursite.com/api/favorites that returns a list of all favorites as JSON. Then, in your Chrome extension, you can parse the JSON and display the results to the user. You will probably want to do this with an ajax call (see jquery.ajax for an easy way to use ajax).
Assuming you want user accounts, your user will need to be logged in. Then you can use the user's cookies to verify that they are logged in and show them custom info; i.e., going to http://yoursite.com/api/favorites will show just that user's favorites, not everyone's.
Finally, submitting things to the database: you can have another route where users can send data. For example, going to http://yoursite.com/api/favorites/add?color=red would add the color red to that user's favorites. You will need to write all the logic for adding things to the database yourself; again, it might help to go through a Rails tutorial and then look at building an API.
Related to #3, look into RESTful APIs. A good convention is that if you issue a GET request, you're asking for data, but if you issue a POST request, you are adding data (in your case, creating a new favorite).
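To make those steps concrete, here is a rough Rails sketch; current_user, the Favorite model, and the route names are assumptions for illustration, not a known schema:

# config/routes.rb
Rails.application.routes.draw do
  get  "api/favorites",     to: "api#favorites"
  post "api/favorites/add", to: "api#add_favorite"
end

# app/controllers/api_controller.rb
class ApiController < ApplicationController
  before_action :require_login   # relies on your existing session-based auth

  # GET /api/favorites -> this user's favorites as JSON
  def favorites
    render json: current_user.favorites
  end

  # POST /api/favorites/add with color=red -> create a favorite for this user
  def add_favorite
    favorite = current_user.favorites.create(color: params[:color])
    render json: favorite, status: :created
  end

  private

  def require_login
    head :unauthorized unless current_user   # the cookie check from step 2
  end
end

The Chrome extension then just issues ajax GET/POST requests to those URLs and renders the JSON it gets back.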
Finally, for terminology: it's not a "ruby" database, it's just a database. You can access a database using almost any language, and it sounds like you are accessing it using ruby right now :)
If you only need to store data locally for one machine browsing anywhere online, Chrome has a storage API (chrome.storage) that would work great.
If you do need a Ruby server, I would recommend looking at Sinatra.
I'm building an image gallery in Rails. I want to block direct access to (i.e., downloading of) the images from the web page. How do I do this? I am using the Paperclip gem to upload the images. Please help me resolve this problem.
The browser will need to have access to the image in order to display it; at some point something on the browser will have direct access to the image.
You can obfuscate how the image is retrieved, but that's basically the best you can do. You might be able to play minor games with the referrer.
Don't disable right-click; that's irritating.
You're basically looking for a login system, I assume. Make sure the image is served by a controller, not by nginx or whatever is serving your static files. Given that you're using Paperclip, I assume this is already the case. So really you just need to check for a logged-in user inside the controller, and return a 403 response or something if the user isn't logged in.
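A sketch of that controller, assuming Paperclip and a current_user helper (e.g. from Devise); the model and attachment names are illustrative:

class ImagesController < ApplicationController
  def show
    # 403 for anyone who isn't logged in
    return head :forbidden unless current_user

    image = Image.find(params[:id])
    # send_file streams the file through Rails, so the check above is
    # always enforced; keep the files themselves out of public/ so the
    # web server can't serve them directly.
    send_file image.attachment.path,
              type: image.attachment.content_type,
              disposition: "inline"
  end
end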
We have members-only paid content that is frequently copied and republished without our permission.
We are trying to 'watermark' our content by including each customer's user ID in a fake CSS class, for example <p class='userid_1234'> (except not so obvious, of course :), that would help us track the source of the copying; we then place that class somewhere in the article body.
The problem is that including user-specific information in an article makes the content ineligible for caching, because it is now unique to each user.
This bumps the page load time from ~0.8 ms to ~2.5 s for each article page view.
Does anyone know of any watermarking strategies that can still be used with caching?
Alternatively, what can be done to speed up database access? (Ha ha, that's just a tiny topic, I'm sure...)
We're using the Expression Engine CMS, but I'd like to hear about any strategies; they don't have to be EE-specific.
If you're talking about images then you could use PHP to add a watermark to the images.
How can I add an image onto an image in PHP like a watermark
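That linked answer is PHP (GD); since most of this thread is Ruby, here is the same composite-a-logo-over-the-image idea sketched with the mini_magick gem (the file names are placeholders):

require "mini_magick"

image     = MiniMagick::Image.open("article_photo.jpg")
watermark = MiniMagick::Image.open("logo.png")

# Overlay the logo in the bottom-right corner of the photo
result = image.composite(watermark) do |c|
  c.compose "Over"
  c.gravity "SouthEast"
end
result.write("article_photo_watermarked.jpg")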
It's a tool to help track down the lazy copiers who just copy the source code as-is. This is not preventative, nor is it a deterrent. – Ian
Going by your comment above, you are happy for users to copy your content, just not without the formatting etc. So what you could do is provide users with an embed-style snippet of source code for that particular content, just like YouTube does with videos. Into that embed code you could add your own links back to your site, utilize your own CSS, etc.
That way you can still allow members to use the content, but it will always come out the way you intended, with links back to your site.
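A hypothetical helper for generating such an embed snippet; the domain, CSS URL, and article fields are all made up for illustration:

require "erb"

def embed_code_for(article)
  <<~HTML
    <div class="mysite-embed">
      <link rel="stylesheet" href="https://example.com/embed.css">
      <blockquote>#{ERB::Util.html_escape(article.excerpt)}</blockquote>
      <a href="https://example.com/articles/#{article.id}">
        Read the full article on example.com
      </a>
    </div>
  HTML
end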
Thanks
You could always cache a version that uses a special string, like #!username!#, and then later fill it in with PHP based on which user is viewing it.
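The answer says PHP; in Rails terms the same trick is just a cached string plus a per-request substitution (the helper name is hypothetical, the #!username!# token is from the suggestion above):

def render_article(article, user)
  # Cache one copy of the article with the #!username!# token in it...
  cached_html = Rails.cache.fetch("article/#{article.id}") do
    article.body_with_watermark_token   # hypothetical helper containing "#!username!#"
  end
  # ...then swap in the viewer's id on every request. Only this cheap
  # gsub runs per user; the expensive render is cached once.
  cached_html.gsub("#!username!#", "userid_#{user.id}")
end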
Another approach is to switch from caching on the server to letting the browser cache the page locally for a little while. That way it is only cached per user, and it still reduces the calls to your database. Because an article is fairly static, you can let the local machine cache it and pull in comments via JavaScript.
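In Rails terms (the CMS here is Expression Engine, so treat this purely as an illustration of the header involved), that is one line in the controller:

class ArticlesController < ApplicationController
  def show
    @article = Article.find(params[:id])
    # Let each visitor's browser cache the page privately for 5 minutes;
    # comments and the per-user watermark load separately via JavaScript.
    expires_in 5.minutes, public: false
  end
end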
This last one is probably not what you are really looking for, but I'm going to come out and say it anyway. You could stop treating your users like thieves and instead treat the thieves as thieves. Go to the person hosting the servers your content is on and send them an email telling them that copyrighted premium content is being hosted on their servers without your permission. You can even automate that process.
How do you find out which sites are posting your content? Put a link to your site in the body content, then do a Google Search/Blog Search for articles linking to that page. To automate it, use Google Blog Search, because it offers RSS feeds. Any site with a link back to yours could go into a database with a link to the offending page; someone reviews it, and if it is the entire article, does a Whois lookup and sends them an email.
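A rough sketch of that automation in Ruby, using the standard-library RSS parser; the feed URL and the SuspectPage model are placeholders (Google Blog Search exposed RSS feeds of roughly this shape at the time):

require "rss"
require "open-uri"

FEED_URL = "https://example.com/blogsearch?q=link:mysite.com&output=rss"

feed = RSS::Parser.parse(URI.open(FEED_URL).read, false)
feed.items.each do |item|
  # Queue each page linking back to us; a human reviews it, and if it's
  # a full copy, runs the whois lookup and sends the takedown email.
  SuspectPage.find_or_create_by(url: item.link)   # hypothetical model
end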
What makes you think adding CSS to something is going to stop people from copying it without that CSS? It's more likely that they are just copying the source of the content you are showing them and ignoring all the styling around it. For example, I use Tamper Data to look at all HTTP requests made by Firefox: if I can see it on the page, I can see it in the logs. Even with all the "protection" some sites try to put in place, it generally never works; I can grab what I want without using any screen capture/recording.
If you were serving .flv files, for example, I would easily be able to grab the source of those even if you overlaid them with some CSS. I think the best approach would be to contact the sites publishing your premium content and ask them to remove it. It's either that or watermark the actual content on the fly while sending it to the browser.
I'm trying to find a way of finding out who is downloading what image from an image gallery. Users can download using a button beside the thumbnail, or right-click and use "Save link as". Is it possible to relate a user session or ID to a "Save link as" action across all browsers, using either PHP or JavaScript?
Yes, my preferred way of doing this would be via PHP. You'd have to set up a script which loads the file and sends it to the user's browser. This script would also be able to log the download somewhere (e.g. your database).
For example - in very rough pseudo-code:
download.php
<?php
// download.php -- rough pseudo-code, as noted above
$file = $_GET['file'];              // which image was requested (see the caveat below)
updateFileCount($file);             // log the download, e.g. in your database
header('Content-Type: image/jpeg');
sendFile($file);                    // stream the file's bytes to the browser
Then, you just have your download link point to download.php instead of the actual file. (Note that updateFileCount and sendFile are functions that you would have to provide, of course - this script is an example of a download script which you could use)
Note: I highly recommend avoiding the use of $_GET['file'] to get the whole filename - malicious users could use it to retrieve sensitive files from your web server. But the safe use of PHP downloads is a topic for another question.
You need a gateway script, like ImageDownload.php?picture=me.jpg, or something like that.
That page would return the image bytes, as well as logging that the image was downloaded.
Because the saved images end up locally on the user's computer, there is no way to get that kind of information once they have already retrieved the image from your system. Even with JavaScript, the best you could do, as far as I know, is log each time a user presses the second mouse button, using some kind of Ajax.
I don't really like the idea, but if you wanted to log every time someone downloaded an image, you could host the images inside a Flash or Java app that makes clicking a download button a requirement. That way, the only way for them to get the image otherwise would be to capture packets as they arrive on their side, or take a screenshot.
Your server access logs should already have the request for the non-thumbnailed version of the file, so you just need to modify the log format to include the session ID, which I presume you can map back to a user.
I agree strongly with the suggestion put forward by Phill Sacre. For what you are looking for this is the way to go.
It also has the benefit of potentially letting you keep the tracked files out of the direct web path, so that they can't be hotlinked.
I use this method on a client site where the images are paid content, so access must be restricted.