Selenium Error: no display specified - firefox

I've installed selenium-server-standalone-2.42.2.jar in a Debian VirtualBox VM, installed Firefox 29.0, and am trying to run the following script with PHPUnit (it is the only file in the directory):
<?php
class TestLogin extends PHPUnit_Extensions_Selenium2TestCase
{
    public function setUp()
    {
        $this->setHost('localhost');
        $this->setPort(4444);
        $this->setBrowser('firefox');
        $this->setBrowserUrl('http://debian-vm/phpUnitTutorial');
    }

    public function testHasLoginForm()
    {
        $this->url('index.php');
        $username = $this->byName('username');
        $password = $this->byName('password');

        $this->assertEquals('', $username->value());
        $this->assertEquals('', $password->value());
    }
}
I get the following error:
1) TestLogin::testHasLoginForm
PHPUnit_Extensions_Selenium2TestCase_WebDriverException: Unable to connect to host
127.0.0.1 on port 7055 after 45000 ms. Firefox console output:
Error: no display specified
Error: no display specified
What does this mean?
I've read several threads, and apparently I had to do the following, which I tried:
1) Typing this in the command shell:
export PATH=:0;
Result: I got the same error.
2) Installing vnc4server, which gave me debian-vm:1 as a display. I then set export PATH=debian-vm:1, ran it with RealVNC, and in the viewer (which works) I got the same problem.

You receive this error because you have not set the DISPLAY variable. Here is a guide on how to perform the test on a headless machine.
You have to install Xvfb and a browser first:
apt-get install xvfb
apt-get install firefox-mozilla-build
then start Xvfb:
Xvfb :0 &
set DISPLAY and start Selenium:
export DISPLAY=localhost:0.0
java -jar selenium-server-standalone-2.44.0.jar
and then you will be able to run your tests.
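A small guard before launching the tests can fail fast instead of waiting out Selenium's 45-second connection timeout; this is just a sketch, and `:0` is the display number the Xvfb step above serves:

```shell
# Fall back to the Xvfb display if DISPLAY is not already set,
# then report which display will be used.
: "${DISPLAY:=:0}"
export DISPLAY
echo "Using DISPLAY=$DISPLAY"
```

If `echo $DISPLAY` prints nothing before this guard runs, that is exactly the situation that produces "Error: no display specified".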

These days, setting up headless mode is as easy as passing an option to the Selenium browser driver.
In most environments this can be done by setting the environment variable MOZ_HEADLESS before running your tests, i.e. try:
export MOZ_HEADLESS=1
Then rerun your tests; they should run headless.
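The variable only has to be visible to the process that ends up launching Firefox, so an inline assignment scopes it to a single run; the phpunit invocation below is illustrative:

```shell
# Inline env assignment: MOZ_HEADLESS exists only for this one command.
MOZ_HEADLESS=1 sh -c 'echo "MOZ_HEADLESS is $MOZ_HEADLESS"'
# prints: MOZ_HEADLESS is 1

# The same pattern applies to a real run, e.g.:
#   MOZ_HEADLESS=1 phpunit TestLogin.php
```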
If you're out of luck and it doesn't pick up the env var, try enabling headless support in the driver config. E.g., with the phpunit-selenium lib, do this:
Firefox
$this->setDesiredCapabilities(['moz:firefoxOptions' => ['args' => ['-headless']]]);
Chrome
$this->setDesiredCapabilities(['chromeOptions' => ['args' => ['headless']]]);
See the php-webdriver wiki for more Selenium options.

Certainly scripting is the way to go, but iterating through all possible DISPLAY values is not as good as using the right DISPLAY value. There is also no need for Xvfb, at least on Debian/Ubuntu: Selenium can be run locally or remotely using the current DISPLAY session variable, as long as it is correct. See my post at http://thinkinginsoftware.blogspot.com/2015/02/setting-display-variable-to-avoid-no.html, but in short:
# Check current DISPLAY value
$ echo $DISPLAY
:0
# If xclock fails as below the variable is incorrect
$ xclock
No protocol specified
No protocol specified
Error: Can't open display: :0
# Find the correct value for the current user session
$ xauth list | grep `uname -n`
uselenium/unix:10 MIT-MAGIC-COOKIE-1 48531d0fefcd0a9bde13c4b2f5790a72
# Export with correct value
$ export DISPLAY=:10
# Now xclock runs
$ xclock
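Extracting the display number from the `xauth list` output can also be scripted instead of read off by hand; a sketch using the sample line above:

```shell
# Sample xauth list entry (from the session above).
line='uselenium/unix:10  MIT-MAGIC-COOKIE-1  48531d0fefcd0a9bde13c4b2f5790a72'

# Keep what follows the first colon, then drop everything after the number.
display=":${line#*:}"
display="${display%% *}"
echo "$display"    # prints: :10
```

Then `export DISPLAY="$display"` gives the same result as the manual steps.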

The following is not the right variable:
$ export PATH=:0;
PATH defines where to find executables, such as /bin or /usr/local/bin.
You're working with X11 variants, and in that context :0 refers to DISPLAY localhost:0.
So you probably intended the following:
$ export DISPLAY=:0
But as others have pointed out there needs to actually be an Xserver (virtual or otherwise) at that DISPLAY address. You can't just make up a value and hope it will work.
To find a list of DISPLAYs that your user is authorized to connect to, use the following, then set your DISPLAY variable accordingly (host:displayNumber, or just :displayNumber if on the local host):
$ xauth list

Related

GrumPHP and php-cs-fixer on WSL using wrong version

I'm trying to get GrumPHP to work with a small Laravel 9 project but php-cs-fixer is being pulled from the wrong location and I can't seem to find how to change this.
Error from GrumPHP:
phpcsfixer
==========
PHP needs to be a minimum version of PHP 7.1.0 and maximum version of PHP 7.4.*.
You can fix errors by running the following command:
'/windir/f/wamp64/vendor/bin//php-cs-fixer' '--config=./home/testuser/php-cs-config.php_cs' '--verbose' 'fix'
Seems like an easy fix, so I updated php-cs-fixer and followed the upgrade guide to get to v3 (currently sitting on 3.10). But I can also see that '/windir/f/wamp64/vendor/bin//php-cs-fixer' is not the correct directory for php-cs-fixer; the actual bin folder is located in WSL, not the Windows directory, so I added a GRUMPHP_BIN_DIR to the grumphp YAML, but still no luck.
grumphp.yml
grumphp:
    environment:
        variables:
            GRUMPHP_PROJECT_DIR: "."
            GRUMPHP_BIN_DIR: "./home/testuser/tools/vendor/bin/"
        paths:
            - './home/plustime/tools'
    tasks:
        phpcsfixer:
            config: "./home/testuser/php-cs-config.php_cs"
            allow_risky: ~
            cache_file: ~
            rules: []
            using_cache: ~
            config_contains_finder: true
            verbose: true
            diff: false
            triggered_by: ['php']
I can't seem to find much about this or anything in the docs, so any help would be appreciated.
This ended up coming down to altering how WSL constructs the environment, so that WSL stops building Windows paths into the Linux distribution's PATH.
The answer was found here:
How to remove the Win10's PATH from WSL
Quick run down:
On the WSL instance open the file /etc/wsl.conf with something like
sudo nano /etc/wsl.conf
Add the following lines to the bottom of the file,
[interop]
appendWindowsPath = false
Mine looked like this when it was finished:
# Enable extra metadata options by default
[automount]
enabled = true
root = /windir/
options = "metadata,umask=22,fmask=11"
mountFsTab = false
# Enable DNS – even though these are turned on by default, we'll specify here just to be explicit.
[network]
generateHosts = true
generateResolvConf = true
[interop]
appendWindowsPath = false
Then shut down the WSL instance from your Windows terminal and restart it:
wsl --shutdown
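After the restart, the Windows mount entries should be gone from PATH; a quick check (the /windir prefix matches the mount root in the config above, so on a real system you would run this against "$PATH" instead of the sample string):

```shell
# Count PATH-style entries that come from the Windows mount (/windir/...).
# After appendWindowsPath = false takes effect, the count should be 0.
sample_path="/usr/local/bin:/usr/bin:/windir/f/wamp64/vendor/bin"   # illustrative
printf '%s' "$sample_path" | tr ':' '\n' | grep -c '^/windir'
# prints: 1  (one Windows entry still present in this sample)
```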
GrumPHP is now using the correct php-cs-fixer.

Unable to run chrome with Testcafe on macOS

I have been trying without success to run tests on Chrome using TestCafe on macOS. I have generated all the certificates required, but when launching Chrome with TestCafe it reports ERR_SSL_VERSION_OR_CIPHER_MISMATCH. Below are the arguments I am passing:
yarn run testcafe --hostname localhost --ssl 'pfx = testingdomain.pfx;rejectunasuthorized=true;--ssl key = testingdomain.key;cert=testingdomain.crt' "chrome --use-fake-ui-for-media-stream --allow-insecure-localhost --allow-running-insecure-content" e2e/testmac.js --live
When I remove loading the PFX cert and run the command below, I am able to get to the webpage, but can't access the mic and camera. My command to maximize the browser window also does not work:
yarn run testcafe --hostname localhost "chrome --use-fake-ui-for-media-stream --allow-insecure-localhost --allow-running-insecure-content --live" e2e/testmac.js --ssl 'key=testingdomain.key;cert=testingdomain.crt' --live
My simple test -
import { Selector, ClientFunction } from 'testcafe';

fixture`Audio Configuration Combination`.page`http://XXX.XXX.XXXXXX/sandbox/index.html`;

test('Launch SDK,', async (browser) => {
    await browser.getCurrentWindow().maximizeWindow().wait(100000);
});
I have problems only on Mac; the same setup is working fine on Windows. I need to access the mic and camera, so I am passing "--use-fake-ui-for-media-stream", but I don't see a camera preview. Passing "--use-fake-device-for-media-stream" loads fake devices, which is something I don't need.
Any help is greatly appreciated.
According to this comment on GitHub, it should be sufficient to use either of the two approaches to mock user media, not necessarily both. If you specify testcafe --hostname localhost, you shouldn't need to specify --ssl at all.
I ran the following test from the GitHub discussion mentioned above on macOS:
mock-media-test.js
fixture `WebRTC`
.page`https://webrtc.github.io/samples/src/content/getusermedia/canvas/`;
test(`test`, async t => t.wait(30000));
I used the following command:
testcafe "chrome --use-fake-ui-for-media-stream" mock-media-test.js --hostname localhost
The test ran as expected, and the page displayed the stream from my camera. The --use-fake-device-for-media-stream flag worked for me as well.

How to check or activate SyntaxHighlight_GeSHi?

During MediaWiki installation I checked the option for SyntaxHighlight, and it created a LocalSettings.php line,
wfLoadExtension( 'SyntaxHighlight_GeSHi' );
so I assumed it was installed... but no examples with the <source lang> tag are working. For instance, the example from the guide page, <syntaxhighlight lang="Python" line='line'> def quickSort(arr): ...</syntaxhighlight>, and its variations (with the <source> tag) are not working.
How do I check if it's alive?
How do I activate or complete the installation?
When installed on Linux, SyntaxHighlight_GeSHi requires the pygmentize binary (in the {wiki_installed_folder}/extensions/SyntaxHighlight_GeSHi folder) to be marked executable; by default that property might not be set.
Run "chmod +x pygmentize" to mark it executable, and make sure to set the other read/write flags appropriately to avoid any security issues.
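The change can be verified in place; the demo below uses a temporary stand-in file so it runs anywhere, but on a real wiki you would point it at extensions/SyntaxHighlight_GeSHi/pygmentize:

```shell
# Stand-in for the bundled pygmentize binary (path is illustrative).
f="$(mktemp)"

# 755: owner rwx, group/other r-x, i.e. executable without being world-writable.
chmod 755 "$f"

[ -x "$f" ] && echo "executable bit is set"
# prints: executable bit is set
```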

Not able to change default download directory for chrome with selenium hub docker and ruby watir

After a few days of searching and experimenting with any of the solutions I could find online, I give up and want to get some help from the community.
Ruby gems (ruby 2.5.1):
watir 6.11.0
selenium-webdriver 3.4.1
Docker:
selenium/node-chrome-debug:3.14
selenium/hub:3.14
My ruby code:
prefs = {
  download: {
    prompt_for_download: false,
    default_directory: download_directory
  }
}

browser = Watir::Browser.new(:chrome, url: selenium_hub_url, options: {prefs: prefs})
Our set-up is:
Run a selenium/hub and a selenium/node-chrome-debug. Something that might be different is that we are mounting the /tmp of the base OS as /hosttmp/tmp in the node container
Make the selenium/node-chrome-debug talk to selenium/hub
Make the browser automation talk to the selenium/hub using the code provided above
The problem is that I was never able to set the default download directory, although all other parts are working correctly. The VNC window shows the browser working correctly despite the default download directory setting; it is always /home/seluser/Downloads.
Things I have tried:
Other people's ideas such as different ways to specify the options and preferences. (e.g. using the Capabilities)
Docker security-related settings such as: --privileged --security-opt apparmor:unconfined --cap-add SYS_ADMIN
On the base OS, chmod 777 on the download_directory. The download_directory, for example /tmp/tmp.123 on the base OS, is mounted as /hosttmp/tmp/tmp.123 in the Chrome node container; I could see it and perform read/write operations in this folder both inside the container and on the base OS.
Tweaks about the interesting ruby symbol/string stuff when creating a Hash object.
Does anyone have more ideas about what could lead to this situation? What else could I try, and is there any log I could refer to? There is no error or warning when running the code. Thanks in advance.
I'm using Java + Docker + Selenium + Chrome for automated testing and met a similar issue. Please find my solution below and see if it works for your case.
Don't set the default download directory in the options; just leave "/home/seluser/Downloads" as it is.
When you start the Chrome node on Docker, add a volume parameter that maps the downloaded files to the directory you want, e.g.:
docker run -d -p 5900:5900 --link myhub:hub -v <local_dir>:/home/seluser/Downloads selenium/node-chrome-debug:3.14.0
In my case, the JDK environment and my test script are on a Linux machine while the Selenium WebDriver and browser are on Docker, so a file downloaded by the browser cannot be saved directly on the Linux machine; you have to mount a local directory onto the default download directory on Docker. Then you will find the file saved in the directory you want.
Thanks & Regards!
Jing
Did you define options = Selenium::WebDriver::Chrome::Options.new?
We use
options = Selenium::WebDriver::Chrome::Options.new
prefs = {
  prompt_for_download: false,
  default_directory: download_directory
}
options.add_preference(download: prefs)
and then you would want something like
browser = Watir::Browser.new(:chrome, url: selenium_hub_url, options: options)
But maybe the main problem is just that you are using
options: {prefs: prefs}
instead of
options: {download: prefs}
Okay, by digging into the source code of Watir and Selenium-WebDriver, I think I know the root cause.
I have created an issue, since I am not sure whether this is a bug or a 'feature': The issue
Also, I have a workaround for my case, in watir/capabilities.rb:
Change
selenium_browser = browser == :remote || options[:url] ? :remote : browser
to
selenium_browser = browser == :remote ? :remote : browser
This shouldn't be the final solution as it might not be a good idea. Will wait for what the Watir people say about this.

How do I run my selenium ruby scripts on different browsers?

I have written my script specifically for Firefox. Now I wish to run that same script on Chrome as well as IE. I also want to run my tests in the following order:
Open browser1.
Run script on that browser.
Close browser1.
Open browser2.
Run script on that browser.
Close browser2.
Please help.
In order to run your tests on:
1. Chrome: you will need to install the latest ChromeDriver, unzip it, and add its path to the environment variables.
2. IE: you will need to install the IEDriverServer, unzip it, add its path to the environment variables, and enable protected mode for each zone (Internet Options -> Security tab -> Enable Protected Mode checkbox).
For running your tests in the order you mentioned (not sure what framework you're using), you can do this with a loop:
def all_browsers
  [:firefox, :chrome, :ie].each do |br|
    driver = Selenium::WebDriver.for br
    driver.manage.window.maximize
    driver.navigate.to("http://google.com")
    driver.quit  # close this browser before the next one opens
  end
end
Note that driver.quit has to happen inside the loop; otherwise only the last browser is closed, not each one in turn.
