I'm trying to prevent my Greasemonkey script from executing within IFRAMEs. I'm doing so by using if (window.top != window.self) return;. As soon as I insert that line right after the metadata header, the error console throws a "Security Manager vetoed action" error pointing at this exact line of code. There's no additional information available.
I'm using Firefox 3.6.10 and the latest Greasemonkey extension. I'm new to user scripts, but even after some time looking for an answer I didn't find anything at all.
Greasemonkey has its own API, which allows persistent storage and cross-site HTTP requests. For this reason, scripts are executed in a sandbox so that this API cannot be abused.
To make your code work, use:
if (unsafeWindow.top != unsafeWindow.self) return;
Please note the unsafe part; you may want to review these pages:
http://wiki.greasespot.net/Security
http://wiki.greasespot.net/Avoid_Common_Pitfalls_in_Greasemonkey
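For context, a minimal sketch of a complete script using this guard might look like the following (the metadata values are placeholders):

// ==UserScript==
// @name        No-frames example
// @include     *
// ==/UserScript==

// Greasemonkey wraps the script in a function, so a top-level return is legal here.
if (unsafeWindow.top != unsafeWindow.self) return;

// The rest of the script only runs in the top-level document.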
Alternatively, wrap the code in a <script> tag:
(function (f) {
    var d = document,
        s = d.createElement('script');
    s.setAttribute('type', 'application/javascript');
    s.textContent = '(' + f.toString() + ')()';
    (d.body || d.head || d.documentElement).appendChild(s);
    s.parentNode.removeChild(s);
})(function () {
    /* code here */
});
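Note that code injected this way runs in the page's own context, so it no longer has access to the GM_* API functions.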
I am fairly new to using Selenium in my Ruby script. Basically, my script makes a GET request to some URL and logs in. However, the script fails to send the email and log in automatically because of the Google Chrome pop-up about insecure content being blocked, since one of the images on the page is served over http and not https.
I was able to run the script successfully months ago, but when trying again recently it is unable to proceed with logging in, so I don't know why it stopped working all of a sudden.
The error that I see in the terminal is below. In irb, I can go through each line of code successfully, including using Selenium's "send_keys" and "click" to sign in automatically.
[2018-09-26T13:02:55.002527 #14131] INFO -- : [#http://company.com/favicon.ico'. This request has been blocked; the content must be served over HTTPS.">]
web_app.rb:54:in `': Console Errors Found! (Exception)
I tried searching for a solution but have been unsuccessful. There have been some variations of responses to similar problems, but not much luck getting them to work.
Any feedback on how to fix this would be appreciated.
Start Chrome manually and disable the warning - https://www.thewindowsclub.com/disable-insecure-content-warning-chrome
- then use that browser profile in your script. Here is my setup (Java):
@BeforeClass
public static void setUpClass() {
    System.setProperty("webdriver.chrome.driver", "C:\\Users\\pburgr\\Desktop\\chromedriver\\chromedriver.exe");
    ChromeOptions options = new ChromeOptions();
    // Reuse the existing Chrome profile, including its content settings:
    options.addArguments("user-data-dir=C:\\Users\\pburgr\\AppData\\Local\\Google\\Chrome\\User Data");
    driver = new ChromeDriver(options);
    driver.manage().window().maximize();
}
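If you'd rather not depend on a manually prepared profile, Chrome also accepts the --allow-running-insecure-content switch from code. A minimal sketch of the same idea with the Node selenium-webdriver bindings (the login URL is a placeholder):

const { Builder } = require('selenium-webdriver');
const chrome = require('selenium-webdriver/chrome');

(async () => {
    // Ask Chrome not to block mixed (http-on-https) content.
    const options = new chrome.Options().addArguments('--allow-running-insecure-content');
    const driver = await new Builder()
        .forBrowser('chrome')
        .setChromeOptions(options)
        .build();
    await driver.get('https://company.com/login'); // placeholder URL
})();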
I'm implementing an invisible reCAPTCHA as per the instructions in the documentation: reCAPTCHA V2 documentation
I've managed to implement it without any problems. But, what I'd like to know is whether I can simulate being a robot for testing purposes?
Is there a way to force the reCAPTCHA to respond as if it thought I was a robot?
Thanks in advance for any assistance.
In the Dev Tools, open Settings, then Devices, and add a custom device with any name and a user agent equal to Googlebot/2.1.
Finally, in Device Mode, at the left of the top bar, choose that device (the default is Responsive).
You can test the captcha at https://www.google.com/recaptcha/api2/demo?invisible=true
(This is a demo of the invisible reCAPTCHA. You can remove the invisible URL parameter to test with the captcha button.)
You can use a Chrome plugin like Modify Headers and add a user agent like Googlebot/2.1 (+http://www.google.com/bot.html).
For Firefox, if you don't want to install any add-ons, you can easily change the user agent manually:
Enter about:config into the URL box and hit return;
Search for "useragent" (one word), just to check what is already there;
Create a new string preference (right-click somewhere in the window) named "general.useragent.override", with the string value "Googlebot/2.1" (or any other you want to test with).
I tried this with reCAPTCHA v3, and it indeed returns a score of 0.1.
And don't forget to remove this entry from about:config when you're done testing!
I found this method here (it is a macOS article, but the Firefox method also works for Windows): http://osxdaily.com/2013/01/16/change-user-agent-chrome-safari-firefox/
I find that if you click on the reCAPTCHA logo rather than the text box, it tends to fail.
This is supposedly because bots detect clickable hitboxes. The checkbox is an image, as is the "I'm not a robot" text, and bots can't process images as text properly, but they CAN process clickable hitboxes - which the reCAPTCHA tells them to click; it just doesn't tell them where.
So click as far away from the checkbox as possible while keeping your mouse cursor inside the reCAPTCHA. You will then most likely fail it (it will just bring up the challenge where you have to identify the pictures).
The pictures are there because, as I said, bots can't process images and recognize things like cars.
Yes, it is possible to force-fail a reCAPTCHA v2 for testing purposes.
There are two ways to do that.
First way:
You need the Firefox browser for this. Just make a simple form request and wait for the response. After getting the response, click the refresh button; Firefox will prompt a box saying "To display this page, Firefox must send information that will repeat any action (such as a search or order confirmation) that was performed earlier." Click "Resend".
By doing this the browser will send the previous "g-recaptcha-response" key, and this will fail your reCAPTCHA.
Second way:
You can make a simple POST request with any application; on Linux, for example, you can use curl.
Just make sure that you specify all your form fields as well as the request headers, and, most importantly, POST a field named "g-recaptcha-response" and give it any random value.
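As an illustration, a sketch of such a request using JavaScript's fetch (the endpoint and the extra field names are placeholders for your own form):

// Hypothetical form endpoint and fields - substitute your own.
fetch('https://example.com/submit-form', {
    method: 'POST',
    headers: { 'Content-Type': 'application/x-www-form-urlencoded' },
    body: new URLSearchParams({
        name: 'test',
        // A bogus token makes the server-side verification fail:
        'g-recaptcha-response': 'any-random-value'
    })
}).then(function (response) { console.log(response.status); });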
Just completing Rafael's answer, here is how to use the plugin:
None of the proposed answers worked for me. I just wrote a simple Node.js script which opens a browser window with the page; reCAPTCHA detects the automated browser and shows the challenge. The script is below:
const puppeteer = require('puppeteer');

let testReCaptcha = async () => {
    // headless: false opens a visible browser window,
    // which reCAPTCHA nevertheless detects as automated
    const browser = await puppeteer.launch({ headless: false });
    const page = await browser.newPage();
    await page.goto('http://yourpage.com');
};

testReCaptcha();
Don't forget to install puppeteer by running npm i puppeteer, and change yourpage.com to your page's address.
I'm trying to capture and handle every single request a web page, or a plugin in it, is about to make.
For example, if you open the console and enable Net logging, the console shows each HTTP request as it is about to be sent.
I want to capture every link and call my function, even when a video is loaded by the Flash player (which is also logged in the console if it is http).
Can anyone guide me on what I should do or where I should get started?
Edit: I want to be able to cancel the request and handle it my way if needed.
You can use the Jetpack SDK to get most of what you need, I believe. If you register for system events and listen for http-on-modify-request, you can use the nsIHttpChannel methods to modify the request and response:
let { Ci } = require('chrome');
let { on } = require('sdk/system/events');
let { newURI } = require('sdk/url/utils');

on('http-on-modify-request', function ({subject, type, data}) {
    if (/google/.test(subject.URI.spec)) {
        subject.QueryInterface(Ci.nsIHttpChannel);
        subject.redirectTo(newURI('http://mozilla.org'));
    }
});
Additional info, "Intercepting Page Loads"
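Since the edit asks about cancelling requests: the same observer can abort a channel instead of redirecting it. A minimal sketch under the same SDK setup (the URL filter is a placeholder):

let { Ci, Cr } = require('chrome');
let { on } = require('sdk/system/events');

on('http-on-modify-request', function ({subject}) {
    subject.QueryInterface(Ci.nsIHttpChannel);
    if (/ads\.example\.com/.test(subject.URI.spec)) {
        // NS_BINDING_ABORTED is the conventional status for a deliberate abort.
        subject.cancel(Cr.NS_BINDING_ABORTED);
    }
});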
Non-SDK version, with much, much more control and detail:
This approach lets you look at the load flags, so you can watch only LOAD_DOCUMENT_URI requests, which are frames and the main window. The main window is always LOAD_INITIAL_DOCUMENT_URI.
https://github.com/Noitidart/demo-on-http-examine
https://github.com/Noitidart/demo-nsITraceableChannel - in this one you can see the source before it is parsed by the browser
In these examples you also see how to get the contentWindow and browserWindow from the subject; you can apply this to the SDK example too, just use the "subject".
Also, I prefer to use http-on-examine-response, even in the SDK version, because otherwise you will see all the pages it redirects FROM, not the final redirect TO. Say a URL blah.com redirects you to blah.com/1 and then blah.com/2:
only blah.com/2 has a document, so on modify-request you see blah.com and blah.com/1, and they will have the LOAD_REPLACE flag. Typically they redirect right away, so the document never shows; if it is a timed redirect you will see the document and will also see the LOAD_INITIAL_DOCUMENT_URI flag - I'm guessing, I haven't experienced it myself.
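For reference, a minimal non-SDK sketch of that approach, assuming a chrome-privileged scope such as a bootstrapped add-on (era-appropriate, pre-WebExtensions APIs):

const { interfaces: Ci, utils: Cu } = Components;
Cu.import('resource://gre/modules/Services.jsm');

const observer = {
    observe: function (subject, topic, data) {
        const channel = subject.QueryInterface(Ci.nsIHttpChannel);
        // Only document loads: frames and the main window.
        if (channel.loadFlags & Ci.nsIChannel.LOAD_DOCUMENT_URI) {
            console.log(topic, channel.URI.spec);
        }
    }
};
Services.obs.addObserver(observer, 'http-on-examine-response', false);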
How can I detect the URL currently being browsed in Chrome/Safari/Firefox via Cocoa (desktop app)?
As a side but related note, are there any security restrictions when developing a desktop app that the user will be alerted and asked if they want to allow? e.g. if the app accesses their contact information etc.
Looking for a cocoa based solution, not javascript.
I would do this as an extension, and because you would like to target Chrome, Safari, and Firefox, I'd use a cross-browser extension framework like Crossrider.
So go to crossrider.com, set up an account and create a new extension. Then open the background.js file and paste in code like this:
appAPI.ready(function($) {
    appAPI.message.addListener({channel: "notifyPageUrl"}, function(msg) {
        // Do something, like send an XHR post somewhere
        // notifying you of the pageUrl that the user visited.
        // The url is contained within msg.pageUrl
    });

    var opts = { listen: true };

    // Note: When defining the callback function, the first parameter is an object that
    // contains the page URL, and the second parameter contains the data passed
    // to the context of the callback function.
    appAPI.webRequest.onBeforeNavigate.addListener(function(details, opaqueData) {
        // Where:
        //   * details.pageUrl is the URL of the tab requesting the page
        //   * opaqueData is the data passed to the context of the callback function
        if (opaqueData.listen) {
            appAPI.message.toBackground({
                msg: details.pageUrl
            }, {channel: "notifyPageUrl"});
        }
    }, opts); // opts is the opaque parameter that is passed to the callback function
});
Then install the extension! In the example above, nothing is being done with the detected pageUrl that the user is visiting, but you can do whatever you like here: you could send a message to the user, you could restrict access using the cancel or redirectTo return parameters, you could log it locally using the Crossrider appAPI.db API, or you could send the notification elsewhere, cross-domain, to wherever you like, using an XHR request directly from the background.
Hope that helps!
And to answer the question on security issues desktop-side: desktop applications have the permissions of the user under which they run. So if you are thinking of providing a desktop app that your users will run locally - say, something that detects URLs they access by tapping into the network stream using something like WinPcap on Windows or libpcap on *nix varieties - just be aware of that, and also that libpcap and friends need access to a network card that can be placed in promiscuous mode in the first place, by the user in question.
The pcap / installed-desktop-app solutions are pretty invasive - most folks don't want you listening in on literally everything, and doing so may actually violate some security policies depending on where your users work; their network administrators may not appreciate you "sniffing", whether that is the actual purpose or not. Security guys can get real spooky, so to speak, on these kinds of topics.
The extension via Crossrider is probably the easiest and least intrusive way of accomplishing your goal if I understand the goal correctly.
One last note: you can get the current tab URLs for all tabs using Crossrider's tabs API:
// Retrieve the array of tabs
appAPI.tabs.getAllTabs(function(allTabInfo) {
    // Display the array
    for (var i = 0; i < allTabInfo.length; i++) {
        console.log(
            'tabId: ' + allTabInfo[i].tabId +
            ' tabUrl: ' + allTabInfo[i].tabUrl
        );
    }
});
For the tab API, refer to:
http://docs.crossrider.com/#!/api/appAPI.tabs
For the background navigation API:
http://docs.crossrider.com/#!/api/appAPI.webRequest.onBeforeNavigate
And for the messaging:
http://docs.crossrider.com/#!/api/appAPI.message
And for the appAPI.db stuff:
http://docs.crossrider.com/#!/api/appAPI.db
Have you looked into the Scripting Bridge? You could have an app that launches, say, an AppleScript which checks whether any of the well-known browsers are open and asks them which documents (URLs) they are viewing.
Note: it doesn't necessarily need to be an AppleScript; you can access the Scripting Bridge through Cocoa.
It would, however, require the browser to support it. I know Safari supports it, but I don't know if the others do.
Just as a quick note:
There are ways to do it via AppleScript, and you can easily wrap this code in NSAppleScript calls.
Here's a gist with AppleScript commands for Safari and Chrome. Firefox seems not to support Apple Events.
Well, obviously this is what I had come across on Google:
chrome.tabs.getSelected(null, function (tab) {
    alert(tab.url);
});
In pure JavaScript we can use
alert(document.URL);
alert(window.location.href);
to get the current URL.
What's the proper way to implement front-end Ajax functionality in ModX Revolution? I like the idea of connectors and processors, but for some reason they are for back-end use only - modConnectorResponse checks whether the user is logged in and returns 'access denied' if he is not.
Inserting a snippet into a resource and calling it by the resource URL seems a one-off solution, but that doesn't look right to me.
So how do I get safe connector-like functionality for the front-end?
So, as boundaryfunctions said, it's not possible, and the ModX developers recommend using a resource with a single snippet included. But for those who, despite the will of the developers, look for connector-like functionality, there is a solution made by - guess who - ModX core developer splittingred, in the Gallery extra. In connector.php, before the handleRequest() call, there is code that fakes authorization:
if ($_REQUEST['action'] == 'web/phpthumb') {
    $version = $modx->getVersionData();
    if (version_compare($version['full_version'],'2.1.1-pl') >= 0) {
        if ($modx->user->hasSessionContext($modx->context->get('key'))) {
            $_SERVER['HTTP_MODAUTH'] = $_SESSION["modx.{$modx->context->get('key')}.user.token"];
        } else {
            $_SESSION["modx.{$modx->context->get('key')}.user.token"] = 0;
            $_SERVER['HTTP_MODAUTH'] = 0;
        }
    } else {
        $_SERVER['HTTP_MODAUTH'] = $modx->site_id;
    }
    $_REQUEST['HTTP_MODAUTH'] = $_SERVER['HTTP_MODAUTH'];
}
Works for me. I just needed to replace the first if condition with my own actions.
UPDATE: I forgot to mention that you need to pass the &ctx=web parameter with your AJAX request, because the default context for connectors is "mgr", and anonymous users will not pass the policy check (unless you grant anonymous users access to the "mgr" context).
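For example, a front-end call to such a connector might look like this (the connector path and action name are placeholders for your own Extra):

// Hypothetical connector path and action - substitute your own.
fetch('/assets/components/myextra/connector.php?action=loadMarkers&ctx=web')
    .then(function (response) { return response.json(); })
    .then(function (data) { console.log(data); });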
Also, the code from the Gallery extra that I posted here seems to check some session stuff that doesn't work with anonymous front-end users (it works only when I'm logged in to the back-end), so I replaced it with the following:
if (in_array($_REQUEST['action'], array('loadMap', 'loadMarkers'))) {
    $_SESSION["modx.{$modx->context->get('key')}.user.token"] = 1;
    $_SERVER['HTTP_MODAUTH'] = $_REQUEST['HTTP_MODAUTH'] = 1;
}
I don't know if this code is 100% safe, but when an anonymous user calls it, he doesn't appear to be logged in to the Manager, and when an admin is logged in and calls the action from the back-end, he is not forcibly logged off. That looks like enough security to me.
This solution is still portable (i.e. it can be embedded into a distributable Extra), but security should be researched more seriously for serious projects.
As far as I know, this is not possible in modX at the moment. It has already been discussed on the modx forums and filed as a bug here, but it doesn't look like anybody is working on it.
There are also two possible workarounds in the second link. Personally, I would favour putting the connector functionality into the assets folder to keep the resource tree clean.
There's a more complete explanation of the technique used in Gallery here:
http://www.virtudraft.com/blog/ajaxs-connector-file-using-modxs-main-index.php.html
It allows you to create a connector to run your own processors or built-in MODX processors without creating a resource.