I'm building an app where users can share download links to files. These files are served using Go's http.ServeContent, so they are sent as-is, without any HTML. However, when these files are shared on social media platforms or messaging services, I want a preview image to show up, à la Open Graph.
Is it possible to have Open Graph metadata tags show up for these non-HTML pages?
If it's not, is there any way to embed this content in an HTML5 page while still triggering a download of the file (and not the HTML page) when the link is fetched with something like curl?
Follow-up question: if none of these are possible, is there anything else I could use to have an image and a title show up when my link is shared?
I suggest not linking to the file directly, but having an actual download page for it, so that what gets shared is the download page rather than the file itself.
On the download page you could then implement the appropriate share buttons and initiate the download through a bit of JavaScript.
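For example, here is a minimal Go sketch of that idea (the /share/ and /files/ routes, the example.com preview URLs and the ./files directory are all hypothetical): the shared URL serves a small HTML page carrying the Open Graph tags, and a line of JavaScript then starts the real download.

package main

import (
	"fmt"
	"html"
	"net/http"
)

// sharePageHandler serves a small HTML download page instead of the raw file.
// The page carries the Open Graph tags and has room for share buttons, and a line
// of JavaScript then redirects to the real file so the download still starts on its own.
func sharePageHandler(w http.ResponseWriter, r *http.Request) {
	name := html.EscapeString(r.URL.Path[len("/share/"):]) // e.g. /share/report.pdf
	w.Header().Set("Content-Type", "text/html; charset=utf-8")
	fmt.Fprintf(w, `<!DOCTYPE html>
<html>
<head>
  <meta property="og:title" content="%[1]s">
  <meta property="og:image" content="https://example.com/previews/%[1]s.png">
</head>
<body>
  <p>Your download will begin shortly…</p>
  <!-- share buttons go here -->
  <script>window.location = "/files/%[1]s";</script>
</body>
</html>`, name)
}

func main() {
	http.HandleFunc("/share/", sharePageHandler) // the page people share
	http.Handle("/files/", http.StripPrefix("/files/", http.FileServer(http.Dir("./files")))) // the real downloads
	http.ListenAndServe(":8080", nil)
}

Crawlers only read the HTML, so they pick up the tags; a person clicking the link lands on the page for a moment and then gets the file.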
Alternatively, you could check whether a bot (Facebook, Telegram, Skype, etc.) is requesting the file's location and, if so, serve the appropriate Open Graph or Twitter Card meta tags.
Example of a user agent parser: https://github.com/mssola/user_agent
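A rough Go sketch of that user-agent approach, matching a few well-known crawler substrings by hand (you could plug in the parser linked above instead); the /files/ route, the ./files directory and the example.com preview URL are placeholders:

package main

import (
	"fmt"
	"html"
	"net/http"
	"strings"
)

// crawlerHints are substrings that appear in the User-Agent of common link-preview
// bots (Facebook, Twitter, Telegram, Slack). Extend the list as needed, or use a
// full parser such as github.com/mssola/user_agent instead.
var crawlerHints = []string{"facebookexternalhit", "Twitterbot", "TelegramBot", "Slackbot"}

func isCrawler(r *http.Request) bool {
	for _, hint := range crawlerHints {
		if strings.Contains(r.UserAgent(), hint) {
			return true
		}
	}
	return false
}

// fileHandler serves Open Graph metadata to preview bots and the raw file to
// everything else (browsers, curl, wget, ...), so one URL covers both cases.
func fileHandler(w http.ResponseWriter, r *http.Request) {
	name := r.URL.Path[len("/files/"):]
	if isCrawler(r) {
		w.Header().Set("Content-Type", "text/html; charset=utf-8")
		fmt.Fprintf(w, `<html><head>
  <meta property="og:title" content="%[1]s">
  <meta property="og:image" content="https://example.com/previews/%[1]s.png">
</head></html>`, html.EscapeString(name))
		return
	}
	// http.ServeFile refuses paths containing "..", but sanitize name further in real code.
	http.ServeFile(w, r, "./files/"+name)
}

func main() {
	http.HandleFunc("/files/", fileHandler)
	http.ListenAndServe(":8080", nil)
}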
I'm currently researching how to track the event of a user attaching a file in the browser.
That is, whenever a user tries to upload a file to a site such as Gmail, Messenger, Facebook, Slack, etc., I want to grab that file's info and intercept the upload if I choose to.
What info about the attached file do I need?
Basic metadata: file name, size, file format
Content of the file (if the file is human-readable: text, doc)
What are the intercept actions?
Delay for a specific amount of time: the user cannot send the file until the delay is over
Block the attachment:
Method 1: block the upload
Method 2: block sending
When will I intercept?
When the file name or file content contains keywords from my blacklist
Briefly, those are my aims. If you're wondering why I'm doing this, all I can say is that I want to prevent private files from being sent out to the network through the browser (Chrome, Edge, Firefox, etc.).
Now I'm quite lost in the extension-development documentation and am desperately asking for help.
My questions are:
Could I achieve those goals using a browser extension? And are there any successful solutions or ideas that you can recommend?
Can I do this interception in all the popular browsers (Chrome, Edge, Firefox, etc.), or will the solution only work in Chrome?
P.S.: Solutions that don't involve a browser extension would also be appreciated (especially in Go).
I've tried some of the services out there, including droplet, ctrlq.org/save, and some other sites that support fetching a file directly from a URL and uploading it to Dropbox, Google Drive and the like, without the user having to store the file on a local disk.
The problem is that none of these services support multiple URLs or batch uploading, but I have quite a few URLs and I really need a service where I can paste them in, separated by newlines or semicolons, and have the files uploaded to Dropbox (or any other cloud storage).
Any help would be gladly appreciated.
The Dropbox Saver JavaScript control allows you to save up to 100 files to the user's Dropbox in one shot. You'll need to programmatically create the button using Dropbox.createSaveButton as explained in the linked page.
It seems like the 100-file limit (at any one time) is universal, but you might find that it isn't the case when using the Dropbox REST API. It looks possible to do this with Node.js server side (OAuth and POSTs) or JavaScript client side (automating FileReader). I'll review and try to add content so these aren't just links.
If you can leave a page open for about 20 minutes due to "technical limitations", the Dropbox should be loadable 100 files at a time like that, assuming each upload takes less than 2 seconds; it's also an easy hook for adding a progress indicator.
If you're preloading the Dropbox once yourself, or the initial load is compatible with manual action, perhaps mapping a drive and unzipping an archive of your links to it would work. If your list of links isn't extremely volatile, then the REST API could be used to synchronize changes.
Edit: I forgot to include this page on CloudConvert, which unzips archives containing up to 100 files into Dropbox. Your use case doesn't seem to include retrieving the actual content on your servers (generated zip files), sending the automation list to the browser and then having the browser extract to Dropbox, but it's another option.
The Dropbox API now offers the ability to save a file into Dropbox directly via a URL. There's a blog post about it here:
https://blogs.dropbox.com/developers/2015/06/programmatically-saving-a-url-to-dropbox/
The documentation can be found here:
https://www.dropbox.com/developers/core/docs#save-url
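For batch use, a rough Go sketch of calling that endpoint for a list of URLs might look like the following. It assumes an OAuth access token obtained separately and uses the newer v2 files/save_url route rather than the Core API endpoint described in the links above, so check the current docs for the exact request format; the access token, the /imports/ folder and the example URLs are placeholders.

package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
	"path"
)

const token = "YOUR_DROPBOX_ACCESS_TOKEN" // OAuth access token for your Dropbox app

// saveURL asks Dropbox to fetch url and store it at dst inside the user's Dropbox.
// Dropbox fetches the URL on its own side; see the docs for checking the job status.
func saveURL(dst, url string) error {
	body, _ := json.Marshal(map[string]string{"path": dst, "url": url})
	req, err := http.NewRequest("POST", "https://api.dropboxapi.com/2/files/save_url", bytes.NewReader(body))
	if err != nil {
		return err
	}
	req.Header.Set("Authorization", "Bearer "+token)
	req.Header.Set("Content-Type", "application/json")
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("save_url failed: %s", resp.Status)
	}
	return nil
}

func main() {
	urls := []string{ // the batch of URLs to import
		"https://example.com/file1.zip",
		"https://example.com/file2.zip",
	}
	for _, u := range urls {
		if err := saveURL("/imports/"+path.Base(u), u); err != nil {
			fmt.Println(u, "->", err)
		}
	}
}

Each call asks Dropbox to fetch one URL server-side, so a plain loop covers the batch case without the files ever touching your local disk.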
Typically, on a regular page-by-page website, I would install the analytics JavaScript before the closing body tag.
But with a site where content is shown in overlays (e.g. a one-page portfolio site), how can analytics be installed to track views?
Thanks for any insight!
See Tracking Google Analytics Page Views with Angular.js. Even if you aren't using something like Angular but just straight JavaScript, you can use a similar technique to the one described there: set hash URLs when a user clicks on a different part of the page. That way you can track how a user interacts with your single-page site by generating different URLs for their interactions.
For more information see Pushing Functions onto the Queue.
In the JavaScript that opens the overlay, you can add:
_gaq.push(['_trackPageview', '/url/of/page']);
or, if you're using Universal Analytics (analytics.js):
ga('send','pageview','/url/of/page');
I am using HTML5 offline storage. The goal is to make the whole site available offline, and intuitively, no server requests means all the pages need to be on the client. The only way I know of to accomplish this is to make the site into one page and then show/hide portions with jQuery when the user "navigates". Is there a better way?
The HTML5 offline spec (the application cache) allows multiple pages to be saved offline, so you don't need to put all your content onto one page.
EDIT: link to spec http://www.whatwg.org/specs/web-apps/current-work/multipage/offline.html
Be careful that your jQuery reference does not still point to a CDN; you'll need to serve the relevant .js files locally.
N.B. If your whole site can be generated and saved as individual .html files then all you need to do is to save these files in the correct (relative) directory structure.
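For example, a minimal cache manifest for that multi-page approach might look like the following (the file names are placeholders); each page then opts in with <html manifest="site.appcache">:

CACHE MANIFEST
# v1 - change this comment whenever a file changes, so clients re-download the cache

CACHE:
index.html
about.html
portfolio.html
css/style.css
js/jquery.min.js

NETWORK:
*

Note the locally hosted jquery.min.js, which is the point made above about not loading jQuery from a CDN.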
I'm trying to find a way of finding out who is downloading which image from an image gallery. Users can download using a button beside the thumbnail, or by right-clicking and using "Save link as". Is it possible to relate a user session or ID to a "Save link as" action across all browsers, using either PHP or JavaScript?
Yes, my preferred way of doing this would be via PHP. You'd have to set up a script which loads up the file and sends it to the user's browser. This script would also be able to log the download somewhere (e.g. your database).
For example, a rough sketch:
download.php
<?php
$file = basename($_GET['file']);      // basename() strips any path components from the request
$path = __DIR__ . '/images/' . $file; // adjust to wherever your gallery images live
updateFileCount($file);               // your own logging function, e.g. insert a row in the database
header('Content-Type: image/jpeg');
header('Content-Disposition: attachment; filename="' . $file . '"');
readfile($path);                      // stream the image to the browser
Then you just have your download link point to download.php instead of the actual file. (Note that updateFileCount is a function you would have to provide, of course - this script is just an example of a download script you could build on.)
Note: never pass $_GET['file'] straight to the filesystem - malicious users could use it to retrieve sensitive files from your web server, which is why the example calls basename(). But the safe handling of PHP downloads is a topic for another question.
You need a gateway script, like ImageDownload.php?picture=me.jpg, or something like that.
That page would return the image bytes, as well as log that the image was downloaded.
Because the saved image ends up on the user's computer locally, there would be no way to get that kind of information once they have already retrieved the image from your system. Even with JavaScript, the best you could do, as far as I know, is to log each time a user presses the right mouse button using some kind of Ajax call.
I don't really like the idea, but if you wanted to log every time someone downloaded an image, you could host the images inside a Flash or Java app that made it a requirement to click a download button. That way, the only way for them to get the image without doing that would be to either capture packets as they arrive on their side or take a screenshot.
Your server access logs should already have the request for the non-thumbnailed version of the file, so you just need to modify the log format to include the sessionid, which I presume you can map back to a user.
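For instance, assuming Apache and a PHP session cookie named PHPSESSID (both assumptions; adapt to your server and cookie name), the log format could be extended to record the session cookie with each request:

# httpd.conf - the usual combined format plus the value of the PHPSESSID cookie
LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\" \"%{PHPSESSID}C\"" combined_session
CustomLog logs/access_log combined_session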
I agree strongly with the suggestion put forward by Phill Sacre. For what you are looking for this is the way to go.
It also has the benefit of potentially being able to keep the tracked files out of the direct web path, so that they can't be linked to directly.
I use this method on a client site where the images are paid content and so must have restricted access.