I'm trying to find a way of finding out who is downloading what image from an image gallery. Users can download using a button beside the thumbnail, or by right-clicking and using "Save link as". Is it possible to relate a user session or ID to a "Save link as" action across all browsers, using either PHP or JavaScript?
Yes, my preferred way of doing this would be via PHP. You'd have to set up a script which would load up the file and send it to the user's browser. This script would also be able to log the download somewhere (e.g. your database).
For example - in very rough pseudo-code:
download.php
<?php
// download.php: log the download, then serve the image
$file = $_GET['file'];               // requested image (see the security note below)
updateFileCount($file);              // your own logging function, e.g. a DB update
header('Content-Type: image/jpeg');
sendFile($file);                     // your own output function, e.g. a readfile() wrapper
Then, you just have your download link point to download.php instead of the actual file. (Note that updateFileCount and sendFile are functions that you would have to provide, of course - this script is an example of a download script which you could use)
Note: I highly recommend avoiding the use of $_GET['file'] to get the whole filename - malicious users could use it to retrieve sensitive files from your web server. But the safe use of PHP downloads is a topic for another question.
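To illustrate that warning, here is a minimal hardening sketch; it assumes all downloadable images live in a single images/ directory, and the whitelist pattern is just an example:
<?php
// Hypothetical hardening sketch: only serve simple .jpg/.jpeg names that
// actually exist inside one known images directory.
$name = basename($_GET['file']);                 // strip any path components
$path = __DIR__ . '/images/' . $name;            // assumed images directory
if (!preg_match('/^[\w-]+\.jpe?g$/i', $name) || !is_file($path)) {
    header('HTTP/1.1 404 Not Found');
    exit;
}
updateFileCount($name);                          // same logging hook as above
header('Content-Type: image/jpeg');
readfile($path);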
You need a gateway script, like ImageDownload.php?picture=me.jpg, or something like that.
That page would return the image bytes, as well as logging that the image was downloaded.
Because the images being saved are stored locally on the user's computer, there would be no way to get that kind of information once they have already retrieved the image from your system. Even with JavaScript, the best you could do is log each time a user presses the right mouse button, using some kind of Ajax call.
I don't really like the idea, but if you wanted to log every time someone downloaded an image, you could host the images inside a Flash or Java app that made it a requirement to click a download button. That way the only way for them to get the image without doing that would be to either capture packets as they came in on their side or take a screenshot.
Your server access logs should already have the request for the non-thumbnailed version of the file, so you just need to modify the log format to include the session ID, which I presume you can map back to a user.
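For instance, with Apache this could be a custom log format that appends the session cookie to each entry (the cookie name PHPSESSID and the format name below are assumptions):
# Hypothetical Apache log format: the combined format plus the PHP session cookie
LogFormat "%h %l %u %t \"%r\" %>s %b \"%{PHPSESSID}C\"" session_combined
CustomLog logs/access_log session_combined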
I agree strongly with the suggestion put forward by Phill Sacre. For what you are looking for this is the way to go.
It also has the benefit of letting you keep the tracked files out of the direct web path, so that they can't be linked to directly.
I use this method on a client site where the images are paid content, so access must be restricted.
Related
I've tried some of the services out there, including Droplet, ctrlq.org/save, and some other sites that support directly fetching a file from a URL and uploading it to Dropbox, Google Drive and the like, without the user having to store the file on a local disk.
Now the problem is that none of these services support multiple URLs or batch uploading. I have quite a few URLs, and I really need a service where I can paste them in, separated by newlines or semicolons, and have the files uploaded to Dropbox (or any other cloud storage).
Any help would be greatly appreciated.
The Dropbox Saver JavaScript control allows you to save up to 100 files to the user's Dropbox in one shot. You'll need to programmatically create the button using Dropbox.createSaveButton as explained in the linked page.
It seems like the 100-file limit (at any one time) is universal, but you might find that it isn't the case when using the Dropbox REST API. It looks possible to do this with Node.js on the server side (OAuth and POSTs) or JavaScript on the client side (automating FileReader). I'll review and try to add content so these aren't just links.
If you can leave a page open for about 20 minutes due to "technical limitations", the Dropbox should be loadable 100 files at a time like that, assuming each upload takes less than 2 seconds; it would also be an easy hook for adding a progress indicator.
If you're preloading the dropbox once yourself or the initial load is compatible with manual action, perhaps mapping a drive and trying to unzip an archive of your links to it would work. If your list of links isn't extremely volatile then the REST API could be used to synchronize changes.
Edit: I forgot to include this page on CloudConvert, which unzips archives containing up to 100 files into Dropbox. Your use case doesn't seem to include retrieving the actual content at your servers (generating zip files), sending the automation list to the browser, and then having the browser extract to Dropbox, but it's another option.
The Dropbox API now offers the ability to save a file into Dropbox directly via a URL. There's a blog post about it here:
https://blogs.dropbox.com/developers/2015/06/programmatically-saving-a-url-to-dropbox/
The documentation can be found here:
https://www.dropbox.com/developers/core/docs#save-url
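As a rough sketch against that /1/save_url endpoint (untested; the access token, URL list and paths below are placeholders):
<?php
// Hedged sketch: push one remote URL into Dropbox via the save_url
// endpoint documented above. $token is assumed to be a valid OAuth 2
// access token; error handling is omitted.
function saveUrlToDropbox($token, $url, $dropboxPath) {
    $ch = curl_init('https://api.dropbox.com/1/save_url/auto' . $dropboxPath);
    curl_setopt_array($ch, array(
        CURLOPT_POST           => true,
        CURLOPT_POSTFIELDS     => http_build_query(array('url' => $url)),
        CURLOPT_HTTPHEADER     => array('Authorization: Bearer ' . $token),
        CURLOPT_RETURNTRANSFER => true,
    ));
    $response = curl_exec($ch);
    curl_close($ch);
    return json_decode($response, true);         // async job descriptor
}

// Batch use for the question above: one call per newline-separated URL.
$token   = 'YOUR_ACCESS_TOKEN';                  // placeholder
$urlList = "http://example.com/a.jpg\nhttp://example.com/b.jpg";
foreach (array_filter(array_map('trim', explode("\n", $urlList))) as $u) {
    saveUrlToDropbox($token, $u, '/saved/' . basename(parse_url($u, PHP_URL_PATH)));
}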
If I have a filename for a local file on the computer:
$img = "deskfile:///D%3A%2FSCANS%2F%23AUKT%2Fimg2014%2F2014-06+SP%2FEPSON007.jpg"
how can I upload it to the server without using the file selector?
If I enter the file address in the URL bar of a browser, I can display the image.
But if I load the image in an <img> tag, it won't display. I've read it's because of restrictions in the browser.
I can't add a value attribute to the file input either.
Is there any way to upload the image from the string?
Or can I at least open the correct directory in the file selector, so the user won't have to browse the whole computer when looking for the file?
Yes, there is a way. You can use the File API in HTML5, and/or a polyfill for it, to load the image in the browser before posting it back to the server. The best such polyfill that I know of is called mOxie/Plupload. It includes Flash and Silverlight fallbacks for older browsers.
You can display the image because it is stored locally on your computer. How do you know where the image is going to be on the user's computer? The only way to access the user's file system is through the file selector. Once the user has selected a file, you can then use any API to save that file on the server on your terms, but you will not be able to see each of your users' file systems from your page (for security reasons). Could you elaborate more on what you are trying to accomplish? What exactly are you trying to do?
I need to create, for a specific project, an image manager that works via Ajax (to get the list of images, display them, ...).
The upload of new images, or image modification, is done via an Ajax script (using the new JavaScript File API).
The upload works fine, but I encounter a problem in the case of image modification: the image displayed by the browser after the upload is the cached one, not the uploaded one!
I know it's a classic cache problem that can be solved via the imagesrc + '?' + new Date().getTime() hack, but I can't use it here.
In fact, this hack doesn't really reload the image; it only creates a new instance of the image in the cache, associated with the URL imagesrc + '?' + new Date().getTime().
So if, at any moment, the image manager tries to display the image again without adding '?' + new Date().getTime() to the src, it will display the old image again.
And I cannot add this hack systematically either (because, for example, if the image manager needs to display a lot of very heavy images, it's useful to get them from the browser cache until they are modified).
I searched for a way to solve this problem on the internet (really replacing the cached image after a JavaScript upload, instead of using the hack above), but I found nothing.
Is there a way to do this, or is it totally impossible ?
Any help or suggestion would be greatly appreciated.
Many thanks in advance
Olivier
Configure your server to send ETag headers for the images.
An ETag is a hash value of the file that changes when the file is modified. If an ETag is sent, the browser will add an If-None-Match header containing the last received ETag of that resource on its next request, and the server will respond with 304 Not Modified to save traffic if nothing has changed, or send the new file if there is one.
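A minimal sketch of that handshake in PHP, assuming the script fronts the image and $path is a hypothetical location on disk:
<?php
// Hedged sketch: emit an ETag derived from the file contents and answer
// revalidation requests with 304 when nothing has changed.
$path = 'images/photo.jpg';                      // hypothetical image path
$etag = '"' . md5_file($path) . '"';
header('ETag: ' . $etag);
if (isset($_SERVER['HTTP_IF_NONE_MATCH']) &&
    trim($_SERVER['HTTP_IF_NONE_MATCH']) === $etag) {
    header('HTTP/1.1 304 Not Modified');         // browser re-uses its cached copy
    exit;
}
header('Content-Type: image/jpeg');
readfile($path);                                 // send the (possibly new) version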
Using the jQuery-wrapped version of Fine Uploader v3.3.
Is it possible to populate the file list with files already in the upload folder?
I think "_addToList(id, name)" should do the trick, but I can't get it to work. Any ideas?
Seems that they are currently working on this feature:
https://github.com/Widen/fine-uploader/issues/784
So, this will be available soon.
This is not a behavior that Fine Uploader currently supports. Fine Uploader only displays files that users have submitted to the uploader since the current uploader instance was created. It doesn't try to be an all-in-one web application. You could probably add your own item to the list/UI via JavaScript. That probably wouldn't be terribly difficult, but it seems like an odd thing to do.
If you'd like to discuss your specific use case more, please open up a feature request in the GitHub issue tracker.
Generally, client-side code cannot add stored or hard-coded path-based file names for use in any type of POST or upload operation. Obviously this is a security measure; you can imagine what would happen if a malicious web page could bake a file name into a generic POST operation. So, from what I understand, only the user can specify path-based file names, via a file browser, for the session it is included in. This applies to HTML/JavaScript/jQuery, but I'm unsure whether Flash/Silverlight-based solutions would also be limited. I think a Java-based uploader would be free of this restriction, but then you are just moving closer and closer to installed software.
We have members-only paid content that is frequently copied and republished without our permission.
We are trying to ‘watermark’ our content by including each customer’s user ID in a fake CSS class, for example <p class='userid_1234'> (except not so obvious, of course :), which would help us track the source of the copying; we then place that class somewhere in the article body.
The problem is, by including user-specific information into an article, it makes it so that the article content is ineligible for caching because it is now unique to each user.
This bumps the page load time from ~.8ms to ~2.5sec for each article page view.
Does anyone know of any watermarking strategies that can still be used with caching?
Alternatively, what can be done to speed up database access? (Ha ha, that’s just a tiny topic, I’m sure...)
We're using the CMS Expression Engine, but I'd like to hear about any strategies. They don't have to be EE-specific.
If you're talking about images then you could use PHP to add a watermark to the images.
How can I add an image onto an image in PHP like a watermark
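As a rough sketch of that idea using PHP's GD extension (the file names here are hypothetical):
<?php
// Hedged sketch: stamp watermark.png onto the bottom-right corner of a
// JPEG before sending it to the browser.
$image  = imagecreatefromjpeg('photo.jpg');      // hypothetical source image
$stamp  = imagecreatefrompng('watermark.png');   // hypothetical watermark
$margin = 10;
imagecopy(
    $image, $stamp,
    imagesx($image) - imagesx($stamp) - $margin, // destination x
    imagesy($image) - imagesy($stamp) - $margin, // destination y
    0, 0, imagesx($stamp), imagesy($stamp)
);
header('Content-Type: image/jpeg');
imagejpeg($image);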
It's a tool to help track down the lazy copiers who just copy the source code as-is. This is not preventative, nor is it a deterrent. – Ian
Going by your above comment, you are happy with users copying your content, just not without the formatting etc. So what you could do is provide users with an embeddable piece of source code for that particular content, just like YouTube does with videos. Into that embed code you could add your own links back to your site, utilize your own CSS, etc.
That way you can still allow the members to use the content but it will always come out the way you intended it with links back to your site.
Thanks
You could always cache a version that uses a special string, like #!username!#, and then later fill it in with PHP based on which user is viewing it.
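A tiny sketch of that substitution step, assuming the rendered article is cached with the token in place (the cache call and variable names are hypothetical):
<?php
// Hedged sketch: the article is cached once with the #!username!# token,
// then personalized per request, so the expensive render stays cacheable.
$article = $cache->get('article_42');            // hypothetical cache lookup
echo str_replace('#!username!#',
                 htmlspecialchars($currentUser), // hypothetical current user ID
                 $article);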
Another way, I believe, is to switch from caching on the server to letting the browser cache it locally for a little while. That way it is only cached per user, and it reduces the calls to your database. Because an article is pretty static, you could just let the local computer cache it and pull in comments via JavaScript.
This last one is probably not what you are really looking for, but I'm gonna come out and say it anyway. You could stop treating your users like thieves, and instead treat the thieves as thieves. Go to the person hosting the servers your content is on and send them an email telling them that copyrighted premium content is being hosted on their servers without your permission. You can even automate that process.
How do you find out what sites are posting your content? Put a link in the body content back to your site, and do a Google Search/Blog Search for articles linking to that site. To automate it, use Google Blog Search, because it offers RSS feeds. Anyone that has a link back to your site could go into a database with a link to the page; someone could look at it, and if it is the entire article, do a Whois lookup and send them an email.
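A hypothetical sketch of that automation in PHP (the feed URL is made up; any blog-search RSS feed with the standard channel/item layout would work):
<?php
// Hedged sketch: poll a blog-search RSS feed for pages linking back to
// your site and print them for review.
$feed = simplexml_load_file('https://blogsearch.example.com/feeds?q=link:yoursite.com&output=rss');
foreach ($feed->channel->item as $item) {
    // In practice, insert these into a review queue/database instead.
    echo (string) $item->link, "\n";
}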
What makes you think adding CSS to something is going to stop people from copying it without that CSS? It's more likely that they are just copying the source of the content you are showing them and ignoring all the styling around it. For example, I use Tamper Data to look at all HTTP requests made by Firefox; if I can see it on the page, I can see it in the logs. Even with all the "protection" some sites try to put in place, it generally never works. I can grab what I want without using any screen capture/recording.
If you were serving FLVs, for example, I would easily be able to grab the source of that even if you overlaid it with some CSS. I think the best approach would be to contact the sites publishing your premium content and ask them to remove it. It's either that, or watermark the actual content on the fly while sending it to the browser.