I'm trying to capture and handle every single request a web page, or a plugin in it, is about to make.
For example, if you open the console and enable Net logging, when an HTTP request is about to be sent, the console shows it there.
I want to capture every request and call my own function, even when a video is loaded by the Flash player (which is also logged in the console if it goes over HTTP).
Can anyone guide me on what I should do, or where I should get started?
Edit: I want to be able to cancel the request and handle it my way if needed.
You can use the Jetpack SDK to get most of what you need, I believe. If you register for system events and listen for http-on-modify-request, you can use the nsIHttpChannel methods to modify the request and response:
let { Ci } = require('chrome');
let { on } = require('sdk/system/events');
let { newURI } = require('sdk/url/utils');

on('http-on-modify-request', function ({subject, type, data}) {
  // subject is the channel that is about to be opened
  if (/google/.test(subject.URI.spec)) {
    subject.QueryInterface(Ci.nsIHttpChannel);
    // Redirect the request; you could also cancel it here instead
    subject.redirectTo(newURI('http://mozilla.org'));
  }
});
Additional info: "Intercepting Page Loads".
A non-SDK version, with much, much more control and detail:
This allows you to look at the load flags, so you can watch only LOAD_DOCUMENT_URI, which covers frames and the main window. The main window always has LOAD_INITIAL_DOCUMENT_URI.
https://github.com/Noitidart/demo-on-http-examine
https://github.com/Noitidart/demo-nsITraceableChannel - in this one you can see the source before it is parsed by the browser
In these examples you also see how to get the contentWindow and browserWindow from the subject; you can apply this to the SDK example as well, just use the "subject".
Also, I prefer to use http-on-examine-response, even in the SDK version, because otherwise you will see all the pages the request redirects FROM, not the final redirect TO. Say a URL blah.com redirects you to blah.com/1 and then blah.com/2:
Only blah.com/2 has a document, so on modify-request you see blah.com and blah.com/1, and they will have the LOAD_REPLACE flag. Typically they redirect right away, so the document never shows; if it is a timed redirect, you will see the document and will also see the LOAD_INITIAL_DOCUMENT_URI flag. I'm guessing there, though; I haven't experienced it myself.
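To tie this together, here is a minimal non-SDK sketch of the http-on-examine-response approach with the flag checks described above; treat it as a sketch in the spirit of the linked demos, not a polished add-on (the logging and the commented-out cancel are illustrations):

const { interfaces: Ci, utils: Cu, results: Cr } = Components;
Cu.import('resource://gre/modules/Services.jsm');

const observer = {
  observe: function (subject, topic, data) {
    const channel = subject.QueryInterface(Ci.nsIHttpChannel);
    // Only watch documents: frames and the main window, per the flags above
    if (channel.loadFlags & Ci.nsIChannel.LOAD_DOCUMENT_URI) {
      const isMainWindow = !!(channel.loadFlags & Ci.nsIChannel.LOAD_INITIAL_DOCUMENT_URI);
      console.log((isMainWindow ? 'main window: ' : 'frame: ') + channel.URI.spec);
      // To cancel the request instead (per the edit in the question):
      // channel.cancel(Cr.NS_BINDING_ABORTED);
    }
  }
};

Services.obs.addObserver(observer, 'http-on-examine-response', false);
// Don't forget Services.obs.removeObserver(observer, 'http-on-examine-response') on unload.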
I have a process that launches a browser, which makes a request to a local server.
The server responds, but I do not know how to see the response on the client side.
I need the browser to be the one making the request; I do not want to write the request myself with http.NewRequest.
client.go:
// requires the "os" and "log" imports
func openChrome() {
	page := "https://localhost:1333/"
	program := "C:\\Program Files (x86)\\Google\\Chrome\\Application\\chrome.exe"
	// By convention argv[0] is the program path itself; the page URL is the real argument
	args := []string{program, page}
	attr := &os.ProcAttr{
		Files: []*os.File{os.Stdin, os.Stdout, os.Stderr},
	}
	proc, err := os.StartProcess(program, args, attr)
	if err != nil {
		log.Fatal(err)
	}
	if _, err := proc.Wait(); err != nil {
		log.Fatal(err)
	}
}
To see the response and have easier control over requests, you could use a tool like chromedp, chromedriver (with WebDriver) or Selenium to load pages and find the response in the browser. All of these are accessible to varying degrees from Go, and can be used to drive the browser through the standard request cycle as if a human were doing it (to load content and query what is loaded).
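For instance, a minimal chromedp sketch (using the URL from the question; note that a local self-signed certificate would still need to be trusted by the browser):

package main

import (
	"context"
	"fmt"
	"log"

	"github.com/chromedp/chromedp"
)

func main() {
	ctx, cancel := chromedp.NewContext(context.Background())
	defer cancel()

	// Navigate and read back the rendered document, i.e. the response as the browser sees it
	var html string
	err := chromedp.Run(ctx,
		chromedp.Navigate("https://localhost:1333/"),
		chromedp.OuterHTML("html", &html, chromedp.ByQuery),
	)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(html)
}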
You can see the stdout and stderr of a process you launch, but that is unlikely to help you for this particular task, so I'm ignoring it.
You don't give a reason for excluding a direct request, but for full control this would be another possibility, and unless you need to render HTML your code can do everything a browser could do (change the user agent, parse HTML, etc.).
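A minimal sketch of that direct route, assuming the local server from the question uses a self-signed certificate (hence the test-only InsecureSkipVerify):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"log"
	"net/http"
)

func main() {
	// For a local self-signed certificate only; never do this in production
	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	}}

	req, err := http.NewRequest("GET", "https://localhost:1333/", nil)
	if err != nil {
		log.Fatal(err)
	}
	req.Header.Set("User-Agent", "Mozilla/5.0 (my-client)") // anything a browser could send

	resp, err := client.Do(req)
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	body, err := io.ReadAll(resp.Body)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(resp.Status)
	fmt.Println(string(body))
}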
I'm implementing an invisible reCAPTCHA as per the instructions in the documentation: reCAPTCHA V2 documentation
I've managed to implement it without any problems. But what I'd like to know is whether I can simulate being a robot for testing purposes.
Is there a way to force the reCAPTCHA to respond as if it thought I was a robot?
Thanks in advance for any assistance.
In the Dev Tools, open Settings, then Devices, add a custom device with any name and user agent equal to Googlebot/2.1.
Finally, in Device Mode, at the left of the top bar, choose the device (the default is Responsive).
You can test the captcha at https://www.google.com/recaptcha/api2/demo?invisible=true
(This is a demo of the invisible reCAPTCHA. You can remove the invisible URL parameter to test with the captcha button.)
You can use a Chrome plugin like Modify Headers and add a user agent like Googlebot/2.1 (+http://www.google.com/bot.html).
For Firefox, if you don't want to install any add-ons, you can easily change the user agent manually:
Enter about:config into the URL box and hit return;
Search for "useragent" (one word), just to check what is already there;
Create a new string (right-click somewhere in the window; i.e. a new preference) named "general.useragent.override", with the string value "Googlebot/2.1" (or any other you want to test with).
I tried this with reCAPTCHA v3, and it indeed returns a score of 0.1.
And don't forget to remove this preference from about:config when you are done testing!
I found this method here (it is a macOS article, but the Firefox method also works on Windows): http://osxdaily.com/2013/01/16/change-user-agent-chrome-safari-firefox/
I find that if you click on the reCAPTCHA logo rather than the checkbox, it tends to fail.
This is because bots detect clickable hitboxes. The checkbox is an image, as is the "I'm not a robot" text, and bots can't properly process images as text, but they CAN process clickable hitboxes, which the reCAPTCHA tells them to click; it just doesn't tell them where.
Click as far away from the checkbox as possible while keeping your mouse cursor in the reCAPTCHA, and you will most likely fail it (it will just bring up the challenge where you have to identify the pictures).
The pictures are there because, as I said, bots can't process images and recognize things like cars.
Yes, it is possible to force-fail a reCAPTCHA v2 for testing purposes.
There are two ways to do that.
First way:
You need the Firefox browser for this. Just submit a simple form request and wait for the response. After getting the response, click the refresh button; Firefox will prompt a box saying "To display this page, Firefox must send information that will repeat any action (such as a search or order confirmation) that was performed earlier." Then click "Resend".
By doing this, the browser will send the previous g-recaptcha-response key, and this will fail your reCAPTCHA.
Second way:
You can make a simple POST request with any application; on Linux, for example, you can use curl.
Just make sure that you specify all your form fields and the request headers and, most importantly, POST a field named g-recaptcha-response and give it any random value.
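For example (the endpoint and the other form fields here are placeholders for your own form):

curl -X POST https://example.com/your-form-handler \
     -H "Content-Type: application/x-www-form-urlencoded" \
     -d "email=test@example.com" \
     -d "g-recaptcha-response=some-random-value"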
Just to complete Rafael's answer, here is how to use the plugin.
None of the proposed answers worked for me. I just wrote a simple Node.js script which opens a browser window with a page; reCAPTCHA detects the automated browser and shows the challenge. The script is below:

const puppeteer = require('puppeteer');

let testReCaptcha = async () => {
  // headless: false opens a visible browser window
  const browser = await puppeteer.launch({ headless: false });
  const page = await browser.newPage();
  await page.goto('http://yourpage.com');
};

testReCaptcha();
Don't forget to install puppeteer by running npm i puppeteer, and change yourpage.com to your page address.
How can I detect the URL that the user is browsing in Chrome/Safari/Firefox from a Cocoa (desktop) app?
As a side but related note, are there any security restrictions when developing a desktop app that the user will be alerted about and asked whether they want to allow? E.g., if the app accesses their contact information, etc.
Looking for a Cocoa-based solution, not JavaScript.
I would do this as an extension, and because you would like to target Chrome, Safari, and Firefox, I'd use a cross-browser extension framework like Crossrider.
So go to crossrider.com, set up an account and create a new extension. Then open the background.js file and paste in code like this:
appAPI.ready(function($) {
    appAPI.message.addListener({channel: "notifyPageUrl"}, function(msg) {
        // Do something, like send an xhr post somewhere
        // notifying you of the pageUrl that the user visited.
        // The url is contained within msg.pageUrl
    });

    var opts = { listen: true };

    // Note: When defining the callback function, the first parameter is an object that
    // contains the page URL, and the second parameter contains the data passed
    // to the context of the callback function.
    appAPI.webRequest.onBeforeNavigate.addListener(function(details, opaqueData) {
        // Where:
        // * details.pageUrl is the URL of the tab requesting the page
        // * opaqueData is the data passed to the context of the callback function
        if (opaqueData.listen) {
            appAPI.message.toBackground({
                pageUrl: details.pageUrl // keyed as pageUrl so the listener above can read msg.pageUrl
            }, {channel: "notifyPageUrl"});
        }
    }, opts); // opts is the opaque parameter that is passed to the callback function
});
Then install the extension! In the example above, nothing is being done with the detected pageUrl that the user is visiting, but you can do whatever you like here: you could send a message to the user, you could restrict access using the cancel or redirectTo return parameters, you could log it locally using the Crossrider appAPI.db API, or you could send the notification elsewhere, cross-domain, to wherever you like using an XHR request directly from the background.
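As a sketch of the restriction idea, based on the cancel/redirectTo return parameters mentioned above (check the Crossrider webRequest docs for the exact shape of the return value; the domains here are placeholders):

appAPI.webRequest.onBeforeNavigate.addListener(function(details, opaqueData) {
    // Hypothetical example: block one site outright and redirect another
    if (/blocked-site\.example/.test(details.pageUrl)) {
        return { cancel: true };
    }
    if (/old-site\.example/.test(details.pageUrl)) {
        return { redirectTo: 'https://example.com/' };
    }
}, {});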
Hope that helps!
And to answer the question about security issues on the desktop side: desktop applications have the permissions of the user under which they run. So if you are thinking of providing a desktop app that your users will run locally, say something that detects the URLs they access by tapping into the network stream using something like WinPcap on Windows or libpcap on *nix varieties, just be aware of that; also note that libpcap and friends need access to a network card that can be placed in promiscuous mode in the first place, by the user in question.
The pcap / installed-desktop-app solutions are pretty invasive: most folks don't want you listening in on literally everything, and it may actually violate some security policies depending on where your users work; their network administrators may not appreciate you "sniffing", whether that is the actual purpose or not. Security guys can get real spooky, so to speak, on these kinds of topics.
The extension via Crossrider is probably the easiest and least intrusive way of accomplishing your goal if I understand the goal correctly.
One last note, you can get the current tab urls for all tabs using Crossrider's tabs API:
// retrieves the array of tabs
appAPI.tabs.getAllTabs(function(allTabInfo) {
    // Display the array
    for (var i = 0; i < allTabInfo.length; i++) {
        console.log(
            'tabId: ' + allTabInfo[i].tabId +
            ' tabUrl: ' + allTabInfo[i].tabUrl
        );
    }
});
For the tab API, refer to:
http://docs.crossrider.com/#!/api/appAPI.tabs
For the background navigation API:
http://docs.crossrider.com/#!/api/appAPI.webRequest.onBeforeNavigate
And for the messaging:
http://docs.crossrider.com/#!/api/appAPI.message
And for the appAPI.db stuff:
http://docs.crossrider.com/#!/api/appAPI.db
Have you looked into the Scripting Bridge? You could have an app that launches, say, an AppleScript which checks whether any of the well-known browsers are open and asks them which documents (URLs) they are viewing.
Note: it doesn't necessarily need to be an AppleScript; you can access the Scripting Bridge through Cocoa.
It would, however, require the browser to support it. I know Safari supports it, but I don't know whether the others do.
Just as a quick note:
There are ways to do it via AppleScript, and you can easily wrap such code in NSAppleScript calls.
Here's a gist with AppleScript commands for Safari and Chrome. Firefox seems not to support Apple Events.
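For example, one-liners along these lines (the gist has fuller variants) return the URL of the frontmost tab:

tell application "Safari" to get URL of front document
tell application "Google Chrome" to get URL of active tab of front window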
Well, obviously, this is what I had come across on Google:

chrome.tabs.getSelected(null, function(tab) {
    alert(tab.url);
});
In pure JavaScript we can use either of these to get the current URL:

alert(document.URL);
alert(window.location.href);
I am having an issue with the FB.ui permissions.request window.
FB.ui({
    method: 'permissions.request',
    perms: 'publish_actions',
    display: 'popup'
}, function(response) {
    // This function is never called?
});
Context:
I use the new OAuth dialog (with timeline); I have configured my app to work with it.
I'm French and use Facebook in French.
First issue:
- My callback function is never called...
Second issue:
- The new OAuth window does not seem to be the right window.
It's called 'permission request', but inside it is a copy of the login window, and no permission request is displayed.
So, my question is: how can I do the permission request in JS?
How can I display this window: https://developers.facebook.com/attachment/app_extended_perms.png/ ?
Thanks.
The reason you are not seeing it is that application authorization has become a two-step process:
First, the person accepts to log into your application.
Second, the person accepts your extended permissions, which is where the callback URL comes into play.
Documentation can be found here.
So the reason your callback isn't being called is the two-step process. I would suggest attaching the response handling to the second step.
I am not sure how the JS SDK works, but this is how I managed to do it.
Good luck.
Disable "Enhanced Auth Dialog" in your app's advance settings and see if it works. If you want to stick with Enhanced Auth Dialog then checkout Setup Auth Dialog Preview for Authenticating user section of this tutorial.
I'm looking for a way, through AJAX (not via a JS framework!), to monitor a file for changes in real time. If changes were made to that file, I need it to give an alert message. I'm a total AJAX noob, so please be gentle. ;-)
Edit: let me explain the purpose in a bit more detail. I'm using a chat script I've written in PHP for a webshop, and what I want is to monitor the chat requests from an admin module. The chats are stored in text files, and if someone starts a chat session, a new file is created. If that's the case, I want to see it in the admin module in real time.
Makes sense?
To monitor a file for changes with AJAX, you could do something like this:
var previous = "";

setInterval(function() {
    var ajax = new XMLHttpRequest();
    ajax.onreadystatechange = function() {
        if (ajax.readyState == 4) {
            if (ajax.responseText != previous) {
                alert("file changed!");
                previous = ajax.responseText;
            }
        }
    };
    ajax.open("POST", "foo.txt", true); // Use POST to avoid caching
    ajax.send();
}, 1000);
I just tested it, and it works pretty well, but I still maintain that AJAX is not the way to go here. Comparing file contents will be slow for big files. Also, you mentioned no frameworks, but you should use one for AJAX, just to handle the cross-browser inconsistencies.
AJAX is just JavaScript, so by definition you have no way to access the file unless another service calls your JS/AJAX to notify it about the change.
I've done that from scratch recently.
I don't know how much of a noob you are with PHP (it's the only server-side scripting language I know), but I'll try to be as brief as possible; feel free to ask about any doubt.
I'm using long polling, which consists of this:
Create a PHP script that checks the content of the file periodically and only responds when it sees a change (it could include a description of the change in the response);
Create your XHR object;
Include your notification code as a callback function (it can use the description);
Make the request;
The PHP script will start checking the file, but won't reply until there is a change;
When it responds, the callback will be called and your notification code will run.
If you don't care about the content of the file, only that it has changed, you can check the last-modified time instead of the content in the PHP script, as in the sketch below.
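A minimal sketch of both halves, assuming the last-modified variant and a hypothetical endpoint name check-file.php (the file name and the ~25-second timeout are placeholders):

<?php
// check-file.php: respond only once chat.txt changes, or give up after ~25 seconds
$file = 'chat.txt';
$last = filemtime($file);
for ($i = 0; $i < 25; $i++) {
    sleep(1);
    clearstatcache(); // filemtime() results are cached, so clear between checks
    if (filemtime($file) != $last) {
        echo 'changed';
        exit;
    }
}
echo 'timeout';

And the client side, which starts the next long poll as soon as each request completes:

function poll() {
    var xhr = new XMLHttpRequest();
    xhr.onreadystatechange = function() {
        if (xhr.readyState == 4) {
            if (xhr.status == 200 && xhr.responseText == 'changed') {
                alert('File changed!'); // your notification code goes here
            }
            poll(); // re-poll whether it was a change or a timeout
        }
    };
    xhr.open('POST', 'check-file.php', true);
    xhr.send();
}
poll();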
EDIT: from some comments I see there is something for monitoring file changes called FAM; that seems to be the way to go for the PHP script.