Track file attachments in the browser using an extension - go

I'm currently researching a method of tracking the event when a client user attaches a file in the browser.
That means whenever a user tries to upload a file to a site such as Gmail, Messenger, Facebook, Slack, etc., I can grab the info of that file and intercept it if I want.
What info about the attached file do I want?
Basic metadata: file name, size, file format
Content of the file (if the file is human-readable: text, doc)
What are the intercept actions?
Delay for a specific amount of time: the user cannot send the file until the delay is over
Block the attachment:
Method 1: block uploading the file
Method 2: block sending it
When will I intercept?
When the file name or file content contains keywords from my blacklist
Briefly, those are my aims. If you're wondering why I'm doing this, all I can say is that I want to prevent private files from being sent out to the network through the browser (Chrome, Edge, Firefox, etc.).
Now, I'm quite lost in the documentation for developing extensions and am desperately asking for help.
My questions are:
Could I achieve those goals using a browser extension? Are there any successful solutions or ideas you can recommend?
Could I intercept like this in all popular browsers, i.e. Chrome, Edge, Firefox, etc.? Or will the solution only work in Chrome?
P.S.: Other solutions that don't use a browser extension would also be appreciated (especially in Go).
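To make that P.S. concrete: if I could somehow route uploads through a small Go gateway or proxy (a big if, since this says nothing about how to intercept the browser's HTTPS traffic), the keyword check itself would look roughly like the sketch below. The handler, field handling, and blacklist values are just placeholders, not a working interception solution.

package main

import (
	"io"
	"net/http"
	"strings"
)

// Example blacklist of keywords to block on.
var blacklist = []string{"confidential", "salary"}

// containsBlacklisted reports whether a multipart/form-data request carries a
// file whose name or (text) content contains a blacklisted keyword.
func containsBlacklisted(r *http.Request) bool {
	mr, err := r.MultipartReader() // fails if the request is not multipart
	if err != nil {
		return false
	}
	for {
		part, err := mr.NextPart()
		if err != nil { // io.EOF or a real error: stop either way
			return false
		}
		if part.FileName() == "" {
			continue // a normal form field, not a file
		}
		// Read at most 1 MiB of the part so huge uploads don't blow up memory.
		body, _ := io.ReadAll(io.LimitReader(part, 1<<20))
		haystack := strings.ToLower(part.FileName() + " " + string(body))
		for _, kw := range blacklist {
			if strings.Contains(haystack, kw) {
				return true
			}
		}
	}
}

func main() {
	// Hypothetical gateway endpoint: refuse uploads that match the blacklist.
	http.HandleFunc("/upload", func(w http.ResponseWriter, r *http.Request) {
		if containsBlacklisted(r) {
			http.Error(w, "upload blocked by policy", http.StatusForbidden)
			return
		}
		w.Write([]byte("ok"))
	})
	http.ListenAndServe(":8080", nil)
}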

Related

How to use Open Graph on non-HTML pages

I'm building an app where users can share download links to files. These files are served using golang's http.ServeContent, so they are sent as is, without any HTML. However, when these files are shared on social media platforms or a messaging service, I want to be able to display an image à-la Open Graph.
Is it possible to have Open Graph metadata tags show up for these non-HTML pages?
If it's not, is there any way to embed this content in an HTML5 page while still triggering a download of the file (and not the HTML page) when used with something like, e.g., curl?
Follow-up question: if none of these are possible, is there anything else I could use to have an image and a title show up when my link is shared?
I suggest not linking the file directly, but having an actual download page for each file, so that what gets shared is the download page rather than the file itself.
On the download page you could then implement the appropriate share buttons and initiate the download through a bit of JavaScript.
Alternatively, you could detect whether a bot (like Facebook, Telegram, Skype, etc.) is visiting the file's location and then serve the appropriate Open Graph or Twitter Card headers.
Example of a user agent parser: https://github.com/mssola/user_agent
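To make the second suggestion concrete, here is a rough Go sketch built on the standard library. The handler name, the list of crawler user-agent substrings, and the Open Graph values are illustrative only; a real deployment should use a proper user-agent parser like the one linked above.

package main

import (
	"fmt"
	"net/http"
	"strings"
)

// A few common link-preview crawlers; extend as needed.
var botSignatures = []string{"facebookexternalhit", "twitterbot", "telegrambot", "slackbot", "skypeuripreview"}

func isPreviewBot(ua string) bool {
	ua = strings.ToLower(ua)
	for _, sig := range botSignatures {
		if strings.Contains(ua, sig) {
			return true
		}
	}
	return false
}

func serveDownload(w http.ResponseWriter, r *http.Request) {
	if isPreviewBot(r.UserAgent()) {
		// Bots get an HTML stub with Open Graph metadata instead of the raw file.
		w.Header().Set("Content-Type", "text/html; charset=utf-8")
		fmt.Fprint(w, `<!doctype html><html><head>
<meta property="og:title" content="My file">
<meta property="og:image" content="https://example.com/preview.png">
<meta property="og:description" content="Click to download">
</head><body></body></html>`)
		return
	}
	// Real users get the file, as before (http.ServeContent would also work here).
	http.ServeFile(w, r, "./files/archive.zip")
}

func main() {
	http.HandleFunc("/download", serveDownload)
	http.ListenAndServe(":8080", nil)
}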

PDF files are not getting updated after DocuSign signing

We are using embedded signing with the DocuSign REST API to e-sign files. To sign a file, we upload the required file to our web app and then display it in a viewer in the browser. This file can be signed immediately or later.
What is happening is that when the file is signed and the process is completed, we return to the same file view, but the updated file is not reflected. Only after refreshing the page 3-4 times does it show the signature on the file.
This issue occurs only for files that were uploaded and signed later. For a fresh file which is uploaded and signed immediately, we get the updated file view.
It appears that all the browsers cache files (not the HTML page, but the embedded files). The recommended solutions suggest either adding a parameter to the request when the file is reloaded after signing - but this works only intermittently - or renaming the file so that the browser picks up the updated file. But renaming the file is not an option for us.
Is there some other alternative? Have any other DocuSign API users ever faced something similar? (I believe this issue would not occur if you used email request mode for e-signing.)
Thanks.
There have been no similar reports from anyone else. I am not necessarily discounting yours, but from a write-up of your web app alone I can think of a few things it could be doing out of sequence to produce this behavior.
The first common mistake with embedded signing that comes to mind is this: in general, embedded signing requires several steps: (1) the login call, (2) creating the envelope, and (3) getting the recipient view.
Most people put that logic in the controller code behind a web page, so when they come back it goes through the same sequence again. I understand that your page may have some logic to guard against this, but ideally, when merely viewing, you should only call (3), getting the view. If you somehow end up calling (2) again, you will see the signing sequence all over again.
That's the most common mistake. However, I do not want to dismiss your report. To actually get to the bottom of it, you should post the web service call traces (XML for SOAP, JSON for REST) and show exactly what your app is doing.
Hope this helps.
-mb // i work for docusign
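One more angle, on the caching side of the question rather than the API sequence: serving the embedded PDF with no-cache headers from your own web app usually forces the viewer to re-fetch the updated document after signing. A minimal sketch (Go is used here purely for illustration, the headers are what matter; the route and file path are placeholders):

package main

import "net/http"

// Serve the signed PDF with headers that tell the browser not to cache it,
// so the embedded viewer always re-fetches the updated document after signing.
func servePDF(w http.ResponseWriter, r *http.Request) {
	w.Header().Set("Cache-Control", "no-store, no-cache, must-revalidate")
	w.Header().Set("Pragma", "no-cache") // for older HTTP/1.0 intermediaries
	w.Header().Set("Expires", "0")
	http.ServeFile(w, r, "./documents/signed.pdf") // path is illustrative
}

func main() {
	http.HandleFunc("/signed.pdf", servePDF)
	http.ListenAndServe(":8080", nil)
}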

FineUploader iframe functionality (IE7-9)

So I've been looking for a solution to implement on my site that allows multiple files and large files (>2 GB) to be uploaded, without using any plug-ins, desktop clients, etc. I also have a requirement to support browsers as far back as IE 7. FineUploader seems to fit the bill perfectly, but one aspect I have been trying to figure out is how it uses iframes to support non-HTML5 browsers. Is it basically serving up HTML content so it still allows users to upload files, but with legacy limitations (one file at a time, not being able to read file size prior to upload, etc.)? What functionality of FineUploader do I lose in non-HTML5 browsers?
Thanks,
Stas
I'm the maintainer of Fine Uploader and I will provide an answer to your questions.
For browsers that do not support the File API (IE9 and older, Android 2.3.x) Fine Uploader must rely on a commonly known "trick" to allow for "ajax" uploading. In these browsers, you must submit a form containing a file input element (one for each file). Fine Uploader creates a hidden iframe containing a form and a file input for the associated file. A separate iframe is created for each selected file. Fine Uploader then submits the form when it comes time to upload the associated file or files. The response text from the server is loaded into this iframe when the server response is received, and the library parses this response (which must be a valid JSON response, regardless of the browser).
The following limitations are in place on non-File API browsers:
You can only select one file at a time (one per "choose a file" dialog). This is because none of these browsers support the multiple attribute on file input elements.
Dragging and dropping of files is not supported. This feature depends on File API support.
Progress bars do not appear, as there is no easy way to determine the upload progress of a file in browsers that do not support the File API. There may be efforts in the future to allow for progress calculation, such as a documented convention that results in periodic GET requests to check the progress, or support for the UploadProgress module in nginx or apache.
Client-side file size information is not available. So, any features related to or dependent on file size are not enabled. This information is simply not available unless the browser supports the File API.
Chunking and auto-resume features are not enabled, since these explicitly depend on File API support.
Luckily, all "modern" browsers, including IE10, support the File API.
Hope this helps.
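For anyone wiring up the server side of that iframe convention, here is a rough Go sketch. It assumes the file field is named qqfile and that a simple {"success": true} JSON body is what the client expects; treat both as assumptions to verify against the Fine Uploader documentation for your version.

package main

import (
	"encoding/json"
	"io"
	"net/http"
	"os"
	"path/filepath"
)

// uploadHandler accepts the form POST that the hidden iframe submits and
// always answers with a JSON body, since that is what the client parses out
// of the iframe in non-File API browsers.
func uploadHandler(w http.ResponseWriter, r *http.Request) {
	// text/plain avoids older IE offering the JSON response as a download
	// when it lands inside the iframe.
	w.Header().Set("Content-Type", "text/plain")

	file, header, err := r.FormFile("qqfile") // field name is an assumption; check your config
	if err != nil {
		json.NewEncoder(w).Encode(map[string]any{"success": false, "error": err.Error()})
		return
	}
	defer file.Close()

	// "uploads" must already exist; the filename comes from the client, so strip any path.
	dst, err := os.Create(filepath.Join("uploads", filepath.Base(header.Filename)))
	if err != nil {
		json.NewEncoder(w).Encode(map[string]any{"success": false, "error": err.Error()})
		return
	}
	defer dst.Close()
	io.Copy(dst, file)

	json.NewEncoder(w).Encode(map[string]any{"success": true})
}

func main() {
	http.HandleFunc("/upload", uploadHandler)
	http.ListenAndServe(":8080", nil)
}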

Automate website log-in and form filling?

I'm trying to log in to a website and save an HTML page automatically (I want to be able to do this on a regular time interval). From the surface, this is a typical modern website where, if the user navigates directly to a "locked" URL, a log-in form pops up, and after logging in, the user is redirected to the intended page.
I gave mechanize a shot (http://wwwsearch.sourceforge.net/mechanize/) but it wasn't finding some form elements that were needed for login (hidden elements that get their values from a JavaScript function which runs when the user clicks the "log in" button).
I played a bit with the "web browser" control in .NET but quickly lost interest because I couldn't even get it to submit a query on the Google page.
I don't care what the language is; I'll learn it to solve this problem. At a minimum it has to work in Windows.
A simple example, say, typing in a query into the Google search box would be a great bonus.
In my experience, the most reliable way is to use JavaScript. It works well from .NET. To test, browse to the following addresses one after another in Firefox or Internet Explorer:
http://www.google.com
javascript:function f(){document.forms[0]['q'].value='stackoverflow';}f();
javascript:document.forms[0].submit()
That performs a search for "stackoverflow" on Google. To do it in VB.NET using the WebBrowser control, do this:
WebBrowser1.Navigate("http://www.google.com")
Do While WebBrowser1.IsBusy OrElse WebBrowser1.ReadyState <> WebBrowserReadyState.Complete
Threading.Thread.Sleep(1000)
Application.DoEvents()
Loop
WebBrowser1.Navigate("javascript:function%20f(){document.forms[0]['q'].value='stackoverflow';}f();")
Threading.Thread.Sleep(2000) 'wait for javascript to run
WebBrowser1.Navigate("javascript:document.forms[0].submit()")
Threading.Thread.Sleep(2000) 'wait for javascript to run
Notice how the space in the URL is converted to %20. I'm not certain if this is necessary, but it can't hurt. It is important that the first piece of JavaScript be wrapped in a function. The calls to Sleep() are there to wait for Google to load and for the JavaScript to run. The Do While loop might run forever if the page fails to load, so for automation purposes add a counter that times out after, say, 60 seconds.
Of course, for Google you can just navigate directly to www.google.com?q=stackoverflow, but if your site has hidden input fields, etc., then this is the way to go. It only works for HTML sites - Flash is a whole other matter.
If I understand you right, you want to log in to only one web page, and that form always stays the same. You could either reverse-engineer the JavaScript, or debug it via a JavaScript debugger in the browser (e.g. Firebug for Firefox). Or you can fill in the form in your browser and look at the HTTP request with a network packet sniffer. Once you have all the required form data to submit, you can do the same from your program (that's what I did the last time I had a pretty similar task). Don't forget to store all the cookie data the web server sends back and include it with the next request, to 'stay logged in'.
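A rough sketch of that replay approach in Go, under the assumption that the hidden values you sniffed are static or can be computed outside the browser (the URLs, field names, and values below are placeholders taken from an imaginary sniffed request):

package main

import (
	"io"
	"net/http"
	"net/http/cookiejar"
	"net/url"
	"os"
)

func main() {
	// The cookie jar keeps the session cookie between the login POST and the
	// follow-up GET, which is what keeps us "logged in".
	jar, _ := cookiejar.New(nil)
	client := &http.Client{Jar: jar}

	// Replay the login form exactly as the packet sniffer showed it,
	// including any hidden fields that the page's JavaScript filled in.
	loginResp, err := client.PostForm("https://example.com/login", url.Values{
		"username":     {"me"},
		"password":     {"secret"},
		"hidden_token": {"value-observed-in-the-sniffer"},
	})
	if err != nil {
		panic(err)
	}
	loginResp.Body.Close()

	// Now fetch the locked page and save its HTML.
	resp, err := client.Get("https://example.com/locked-page")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	out, _ := os.Create("page.html")
	defer out.Close()
	io.Copy(out, resp.Body)
}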
It's already been discussed here.
Basically, the gist is that you can use Selenium, an open-source web automation tool, which has API libraries available in various languages like Java, Ruby, etc.
Neoload can handle the form filling with authentication, assuming you don't want to collect data, just perform actions. It's a web stress tool, so it's not really meant to be used as a time-based service, but you COULD just leave it running.
I've used Ruby and Watir (a web app testing suite) for something similar, but it was a very small task (basically visiting URLs from a text file and downloading an image).
There's also an extension called iMacros that can do some automation, but I'm not personally familiar with it (just aware of it).
"I'm trying to log in to a website and save an HTML page automatically"
SAVEAS TYPE=HTM FOLDER=C: FILE=page.html
https://addons.mozilla.org/en-US/firefox/addon/imacros-for-firefox/?src=search
This command, played in the iMacros add-on, will save the page to the C: drive and name it page.html.
Also,
URL GOTO=www.website.com
This goes to the particular website you want to save. You can also use scripting in iMacros and set different websites in the macro.

Logging image downloads

I'm trying to find a way of finding out who is downloading which image from an image gallery. Users can download using a button beside the thumbnail or by right-clicking and using "save link as". Is it possible to relate a user session or ID to a "save link as" action across all browsers using either PHP or JavaScript?
Yes, my preferred way of doing this would be via PHP. You'd have to set up a script that loads the file and sends it to the user's browser. This script would also be able to log the download somewhere (e.g. your database).
For example - in very rough pseudo-code:
download.php
<?php
$file = basename($_GET['file']); // basename() strips any directory components from the request
updateFileCount($file);          // your own logging function (e.g. insert a row in your database)
header('Content-Type: image/jpeg');
sendFile($file);                 // your own output function, e.g. a readfile() wrapper
Then, you just have your download link point to download.php instead of the actual file. (Note that updateFileCount and sendFile are functions that you would have to provide, of course - this script is an example of a download script which you could use)
Note: I highly recommend avoiding the use of $_GET['file'] to get the whole filename - malicious users could use it to retrieve sensitive files from your web server. But the safe use of PHP downloads is a topic for another question.
You need a gateway script, like ImageDownload.php?picture=me.jpg, or something like that.
That page would return the image bytes, as well as logging that the image was downloaded.
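For reference, the same gateway idea sketched in Go, tying the download to a session; the question asks for PHP or JavaScript, so this is only to show the shape of the approach (the cookie name, query parameter, and directory are illustrative):

package main

import (
	"log"
	"net/http"
	"path/filepath"
)

// imageHandler logs which session downloaded which image, then streams the file.
// Point the download links at /image?file=me.jpg instead of at the file itself.
func imageHandler(w http.ResponseWriter, r *http.Request) {
	name := filepath.Base(r.URL.Query().Get("file")) // Base() drops any path components
	session := ""
	if c, err := r.Cookie("session_id"); err == nil { // cookie name is illustrative
		session = c.Value
	}
	log.Printf("download: session=%q file=%q", session, name) // or write to your database
	http.ServeFile(w, r, filepath.Join("gallery", name))
}

func main() {
	http.HandleFunc("/image", imageHandler)
	http.ListenAndServe(":8080", nil)
}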
Because the images being saved end up on the user's computer locally, there would be no way to get that kind of information once they have already retrieved the image from your system. Even with JavaScript, the best you could do, as far as I know, is log each time a user presses the second mouse button using some kind of Ajax call.
I don't really like the idea, but if you wanted to log every time someone downloaded an image, you could host the images inside a Flash or Java app that required clicking a download button. That way, the only way for them to get the image without doing that would be to either capture packets as they came in on their side or take a screenshot.
Your server access logs should already have the request for the non-thumbnailed version of the file, so you just need to modify the log format to include the session ID, which I presume you can map back to a user.
I agree strongly with the suggestion put forward by Phill Sacre. For what you are looking for this is the way to go.
It also has the benefit of potentially being able to keep the tracked files out of the direct web path so that they can't be linked to directly.
I use this method on a client site where the images are paid content and so must have restricted access.
