Unable to automate mouse actions using Appium for Mac driver - macOS

I am automating a Mac application using Appium. I am not able to scroll down the application page. I have tried a couple of approaches without success.
http://appium.io/docs/en/commands/interactions/mouse/moveto/
When I use the option above, I am not able to find the element at all, even though the element is present on the page.
String xpath = "/AXApplication[@AXTitle='Learning']/AXWindow/AXScrollArea[@AXDescription='My Learning']/AXList[@AXDescription='My Learning']/AXList[@AXDescription='Added by Me']/AXButton[contains(name,'See All')]";
Actions action = new Actions(adriver);
WebElement element = adriver.findElement(By.xpath(xpath));
action.moveToElement(element, 500, 900);
action.release();
action.perform();
Please suggest.

Related

Is it possible to force fail a recaptcha v2 for testing purposes? (I.e. pretend to be a robot)

I'm implementing an invisible reCAPTCHA as per the instructions in the documentation: reCAPTCHA V2 documentation
I've managed to implement it without any problems. But what I'd like to know is whether I can simulate being a robot for testing purposes.
Is there a way to force the reCAPTCHA to respond as if it thought I was a robot?
Thanks in advance for any assistance.
In the Dev Tools, open Settings, then Devices, add a custom device with any name and user agent equal to Googlebot/2.1.
Finally, in Device Mode, at the left of the top bar, choose the device (the default is Responsive).
You can test the captcha in https://www.google.com/recaptcha/api2/demo?invisible=true
(This is a demo of the Invisible Recaptcha. You can remove the url invisible parameter to test with the captcha button)
You can use a Chrome Plugin like Modify Headers and Add a user-agent like Googlebot/2.1 (+http://www.google.com/bot.html).
For Firefox, if you don't want to install any add-ons, you can easily change the user agent manually:
Enter about:config into the URL box and hit return;
Search for “useragent” (one word), just to check what is already there;
Create a new string preference (right-click somewhere in the window) named "general.useragent.override", with the string value "Googlebot/2.1" (or any other user agent you want to test with).
I tried this with Recaptcha v3, and it indeed returns a score of 0.1
And don't forget to remove this line from about:config when you are done testing!
I found this method here (it is an Apple OS article, but the Firefox method also works for Windows): http://osxdaily.com/2013/01/16/change-user-agent-chrome-safari-firefox/
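If you want the same override in an automated test rather than in a manually configured browser, the preference can also be set through Selenium's FirefoxProfile. A minimal sketch, assuming the Selenium 3+ Java bindings and a local Firefox/geckodriver setup (the demo URL is the one mentioned above):
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.firefox.FirefoxDriver;
import org.openqa.selenium.firefox.FirefoxOptions;
import org.openqa.selenium.firefox.FirefoxProfile;

public class GooglebotUserAgent {
    public static void main(String[] args) {
        // Same preference as in the manual about:config steps above
        FirefoxProfile profile = new FirefoxProfile();
        profile.setPreference("general.useragent.override", "Googlebot/2.1");

        FirefoxOptions options = new FirefoxOptions();
        options.setProfile(profile);

        // Opens the invisible reCAPTCHA demo with the spoofed user agent
        WebDriver driver = new FirefoxDriver(options);
        driver.get("https://www.google.com/recaptcha/api2/demo?invisible=true");
    }
}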
I find that if you click on the reCaptcha logo rather than the text box, it tends to fail.
This is because bots detect clickable hitboxes. The checkbox is an image, as is the "I'm not a robot" text, and bots can't properly process images as text, but they CAN process clickable hitboxes, which the reCAPTCHA tells them to click; it just doesn't tell them where.
Click as far away from the checkbox as possible while keeping your mouse cursor inside the reCAPTCHA. You will then most likely fail it (it will just bring up the challenge where you have to identify the pictures).
The pictures are there because, as I said, bots can't process images and recognize things like cars.
Yes, it is possible to force-fail a reCAPTCHA v2 for testing purposes.
There are two ways to do that.
First way:
You need the Firefox browser for this. Just make a simple form request, wait for the response, and after getting the response click the refresh button. Firefox will show a prompt saying "To display this page, Firefox must send information that will repeat any action (such as a search or order confirmation) that was performed earlier." Then click "Resend".
By doing this the browser will send the previous "g-recaptcha-response" key again, and this will fail your reCAPTCHA.
Second way:
You can make a simple POST request from any application; on Linux, for example, you can use curl.
Just make sure that you specify all your form fields and the request headers, and, most importantly, POST one field named "g-recaptcha-response" and give it any random value.
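As an illustration, here is a minimal Java sketch of such a request; the form URL and the other field names are placeholders for your own form, not anything defined in this thread:
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class FakeRecaptchaPost {
    public static void main(String[] args) throws Exception {
        // Hypothetical form endpoint -- replace with the page that verifies the captcha
        URL url = new URL("https://example.com/your-form-handler");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("POST");
        conn.setDoOutput(true);
        conn.setRequestProperty("Content-Type", "application/x-www-form-urlencoded");

        // Send the real form fields plus a bogus g-recaptcha-response value
        String body = "name=test&email=test%40example.com&g-recaptcha-response=bogus-value";
        try (OutputStream os = conn.getOutputStream()) {
            os.write(body.getBytes(StandardCharsets.UTF_8));
        }

        // Server-side verification of the bogus token should fail
        System.out.println("HTTP status: " + conn.getResponseCode());
    }
}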
Just completing Rafael's answer, this is how to use the plugin.
None of the proposed answers worked for me. I just wrote a simple Node.js script which opens a browser window with the page. reCAPTCHA detects the automated browser and shows the challenge. The script is below:
const puppeteer = require('puppeteer');

let testReCaptcha = async () => {
  // headless: false opens a visible browser window
  const browser = await puppeteer.launch({ headless: false });
  const page = await browser.newPage();
  await page.goto('http://yourpage.com');
};

testReCaptcha();
Don't forget to install puppeteer by running npm i puppeteer, and change yourpage.com to your page address.

Headless browser using JMeter

I tried to use (jp@gc - HtmlUnit Driver Config) to create a headless browser test using JMeter, but I get this error:
Response message: com.gargoylesoftware.htmlunit.ScriptException: ReferenceError: "getComputedStyle" is not defined.
I read online and it suggests that jp@gc - HtmlUnit Driver Config doesn't support JavaScript. Is there a way I can fix this via JMeter, or is there any other option for headless browser testing? I have a Linux server as the load injector.
Update:
I have a WebDriver Sampler to open the Google page:
WDS.sampleResult.sampleStart()
WDS.browser.get('http://google.com')
WDS.sampleResult.sampleEnd()
and have downloaded PhantomJS, but when I run it, it doesn't show anything on the report. Should I add any other config?
HtmlUnit does not support JS very well.
I have done many tests with each option and I can say that PhantomJS is the best one, with good support for all JS/CSS and a nice renderer for good screenshots.
In code you can use it like this (you can download it from http://phantomjs.org/download.html; phantomjs-1.9.8 is very stable):
DesiredCapabilities caps = new DesiredCapabilities();
caps.setJavascriptEnabled(true);
caps.setCapability("takesScreenshot", true);
caps.setCapability(
        PhantomJSDriverService.PHANTOMJS_EXECUTABLE_PATH_PROPERTY,
        "your custom path\\phantomjs.exe");
WebDriver driver = new PhantomJSDriver(caps);
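Once the driver is up you can, for example, load a page and save a screenshot. A short sketch that continues from the snippet above; the URL and output file name are placeholders, and it assumes Apache Commons IO for the file copy:
// Continues from the snippet above; needs org.openqa.selenium.TakesScreenshot,
// org.openqa.selenium.OutputType, java.io.File and org.apache.commons.io.FileUtils
driver.get("http://example.com");
// PhantomJSDriver implements TakesScreenshot, so a cast is enough
File screenshot = ((TakesScreenshot) driver).getScreenshotAs(OutputType.FILE);
FileUtils.copyFile(screenshot, new File("phantomjs-screenshot.png"));
driver.quit();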
If you want to do that via the JMeter GUI, you need to add a JSR223 Sampler element before your Logic Controller,
and inside the script panel:
// Fully-qualified class names are used because the JSR223 script has no imports
org.openqa.selenium.remote.DesiredCapabilities caps = new org.openqa.selenium.remote.DesiredCapabilities();
caps.setJavascriptEnabled(true);
caps.setCapability("takesScreenshot", true);
caps.setCapability(
        org.openqa.selenium.phantomjs.PhantomJSDriverService.PHANTOMJS_EXECUTABLE_PATH_PROPERTY,
        "your custom path\\phantomjs.exe");
org.openqa.selenium.WebDriver driver = new org.openqa.selenium.phantomjs.PhantomJSDriver(caps);

// Register the driver so the WebDriver Sampler (WDS.browser) picks it up
org.apache.jmeter.threads.JMeterContextService.getContext().getCurrentSampler().getThreadContext()
        .getVariables().putObject(com.googlecode.jmeter.plugins.webdriver.config.WebDriverConfig.BROWSER, driver);
Do not hesitate to ask if you need more information.

Automatically download a file from a web page

I am looking for a way to automatically download a file from a website.
Currently the process is very manual and heavy.
I go to a web page and enter my password and login.
It opens a pop-up, where I have to click a download button to save a .zip file.
Do you have any advice on how I could automate this task?
I am on Windows 7, and I can mainly use MS-DOS batch or Python, but I am open to other ideas.
You can use Selenium WebDriver to automate the download. You can use the snippet below to set the browser download preferences in Java.
FirefoxProfile profile = new FirefoxProfile();
// 2 = save files to the directory given in browser.download.dir
profile.setPreference("browser.download.folderList", 2);
profile.setPreference("browser.download.manager.showWhenStarting", false);
profile.setPreference("browser.download.dir", "C:\\downloads");
// Handle these MIME types without asking
profile.setPreference("browser.helperApps.neverAsk.openFile", "text/csv,application/x-msexcel,application/excel,application/x-excel,application/vnd.ms-excel,text/html,text/plain,application/msword,application/xml");
To handle the popup when it appears, you can use the Robot class:
// Requires java.awt.Robot and java.awt.event.KeyEvent
Robot robot = new Robot();
robot.keyPress(KeyEvent.VK_DOWN);
robot.keyRelease(KeyEvent.VK_DOWN);
robot.keyPress(KeyEvent.VK_ENTER);
robot.keyRelease(KeyEvent.VK_ENTER);
You'll want to take a look at requests (to fetch the HTML and the file) and BeautifulSoup (to parse the HTML and find the links).
requests has built in auth: http://docs.python-requests.org/en/latest/
Beautifulsoup is quite easy to use: http://www.crummy.com/software/BeautifulSoup/bs4/doc/
Pseudocode: use requests to download the site's HTML with authentication. Go through the links by parsing the HTML; if a link meets the criteria, save it in a list, else continue. When all the links have been scraped, go through them and download each file using requests (req = requests.get('url_to_file_here', auth=('username', 'password')); if req.status_code == 200, the content is in req.text, or req.content for binary files such as .zip).
If you can post the link of the site you want to download from, maybe we can do more.

Best way to control firefox via webdriver

I need to control Firefox browser via webdriver. Note, I'm not trying to control page elements (i.e. find element, click, get text, etc); rather I need access to Firefox's profiler and force gc (i.e. I need firefox's Chrome Authority and sdk). For context, I'm creating a micro benchmark framework, not running a normal webdriver test.
Obviously raw WebDriver won't work, so what I've been trying to do is:
1) Create a Firefox extension/add-on that does what I need, i.e.:
var customActions = function() {
  console.log('calling customActions.');
  // I need to access chrome authority:
  var {Cc, Ci, Cu} = require("chrome");
  Cc["@mozilla.org/tools/profiler;1"].getService(Ci.nsIProfiler);
  Cu.forceGC();
  var file = require('sdk/io/file');
  // And do some writes:
  var textWriter = file.open('a/local/path.txt', 'w');
  textWriter.write('sample data');
  textWriter.close();
  console.log('called customActions.');
};
2) Expose my customActions function to a page:
var mod = require("sdk/page-mod");
var data = require("sdk/self").data;
mod.PageMod({
  include: ['*'],
  contentScriptFile: data.url("myscript.js"),
  onAttach: function(worker) {
    worker.port.on('callCustomActions', function() {
      customActions();
    });
  }
});
and in myscript.js:
exportFunction(function() {
  self.port.emit('callCustomActions');
}, unsafeWindow, {defineAs: "callCustomActions"});
3) Load the xpi during my webdriver test, and call out to the global function callCustomActions.
So two questions about this process.
1) This entire process is very roundabout. Is there a better practice for talking to a Firefox extension via WebDriver?
2) My current solution isn't working well. If I run my extension via cfx run directly (without WebDriver) it works as expected. However, neither the SDK nor the chrome authority does anything when running via WebDriver.
By the way, I know my function is being called because the log line "calling customActions." and "called customActions." both do print.
Maybe there are some firefox preferences that I need to set but haven't?
It may be that you do not need the add-on at all. Mozilla uses Marionette for test automation of Firefox OS amongst other things:
Marionette is an automation driver for Mozilla's Gecko engine. It can remotely control either the UI or the internal JavaScript of a Gecko platform, such as Firefox or Firefox OS. It can control both the chrome (i.e. menus and functions) or the content (the webpage loaded inside the browsing context), giving a high level of control and ability to replicate user actions. In addition to performing actions on the browser, Marionette can also read the properties and attributes of the DOM.
If this sounds similar to Selenium/WebDriver then you're correct! Marionette shares much of the same ethos and API as Selenium/WebDriver, with additional commands to interact with Gecko's chrome interface. Its goal is to replicate what Selenium does for web content: to enable the tester to have the ability to send commands to remotely control a user agent.

Not Able to Take Screenshot on Safari using Grid and RemoteWebDriver

I am trying to get a screenshot from Safari using Grid and RemoteWebDriver. I have tried the following approaches:
Using the code below. It works on all browsers except Safari. I also tried returning a Base64 string, but that didn't work either.
WebDriver augmentedDriver = new Augmenter().augment(driver);
File source = ((TakesScreenshot)augmentedDriver).getScreenshotAs(OutputType.FILE);
FileUtils.copyFile(source, new File("screenshot.png"));
Exception: org.openqa.selenium.WebDriverException
Using WebDriverBackedSelenium. This throws an exception.
a.
Selenium sel = new WebDriverBackedSelenium(driver, driver.getCurrentUrl());
sel.captureScreenshot(filename);
Exception: java.lang.UnsupportedOperationException: captureScreenshot
b.
Selenium sel = new WebDriverBackedSelenium(driver, driver.getCurrentUrl());
sel.captureScreenshotToString();
Exception: java.lang.UnsupportedOperationException: WebDriver does not implement TakeScreenshot
I tried sending the key sequence that takes screenshots on a Mac (Command+Shift+3) using sendKeys(Keys.chord(Keys.COMMAND, Keys.SHIFT, "3")), but Keys.COMMAND is not treated as a modifier key, so this also didn't work.
After some research I came across the issue below:
http://code.google.com/p/selenium/issues/detail?id=4203
I also saw this revision, which is supposed to fix the issue, but I am not able to figure out how to apply it:
http://code.google.com/p/selenium/source/detail?r=17731
I would really appreciate some help on this. I am using a Mac, Safari 5.1.7 and Selenium 2.25.
For future reference: this seems to have been fixed in Selenium 2.26
