I build my project on Travis CI. I run tests the following way:
1. Run Firefox with an index.html parameter, which loads a script that repeatedly attempts to connect to a WebSocket server.
2. Run a simple WebSocket server that sends commands to Firefox.
3. The script in Firefox reads these commands (they may contain JavaScript code to test) and executes that code.
This works when I run things locally, and it also worked on Travis until a couple of weeks ago. However, a setup that had worked for years broke unexpectedly: Firefox reports no errors, but the node.js server receives no incoming connections before it times out. I don't know how to debug the problem. The script that runs in Firefox uses console.log extensively, but I can't retrieve these logs from Travis. Is there any way to get some information out of Firefox running on a CI server?
Note: I run Firefox 53; after things broke, I tried upgrading to a recent version. I also used to run the following commands before starting Firefox:
export DISPLAY=:99.0
sh -e /etc/init.d/xvfb start
sleep 10
I also tried removing these lines and using headless mode instead, but that didn't work either.
Firefox 65+ supports a new devtools.console.stdout.content about:config preference, which you can set to true to have Console output dumped to stdout (where it should show up in the Travis log, I believe).
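For example, you could bake the preference into a throwaway profile before launching Firefox on the CI box. A minimal sketch (the paths and URL here are placeholders, not from the original setup):
mkdir -p /tmp/ff-profile
# Enable dumping of Console API output (console.log etc.) to stdout
echo 'user_pref("devtools.console.stdout.content", true);' > /tmp/ff-profile/user.js
# Launch headless; console.log output now ends up in the CI job log
firefox -headless -profile /tmp/ff-profile http://localhost:8000/index.html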
There doesn't seem to be a good solution for earlier versions: Selenium's driver.get_log() doesn't work in Firefox, and other solutions look unsatisfactory to me.
You seem to run index.html as a file:/// URL; in my opinion that's asking for trouble. I recommend spending the time to set up a local https:// server, to save time debugging the ever-increasing "security" restrictions that browsers add for non-https content.
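Setting one up can be as small as this, a sketch using Python's built-in server (plain http; an https variant would additionally need a self-signed certificate):
# Serve the project directory instead of opening index.html via file:///
python3 -m http.server 8000 &
firefox http://localhost:8000/index.html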
If the above doesn't help, try reproducing this with a minimal testcase in a separate repo; if the problem persists, you can share that repo in another question.
You can run VNC alongside Xvfb and connect to it with vncviewer. There are some more details here: https://www.alexkras.com/debugging-xvfb-server-with-vnc/
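For instance, something along these lines, assuming x11vnc is installed and Xvfb is already running on display :99 as in the question (a sketch, not a tested recipe):
# Attach a VNC server to the existing Xvfb display so you can watch the browser
x11vnc -display :99 -bg -nopw -forever
# Then, from another machine (ideally tunneled over ssh rather than exposed):
vncviewer ci-host:0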
We have a testing framework written in AutoIt for our Windows apps (older legacy apps that we will continue supporting). The tests have never been run on a schedule or as part of CI; they have always been run manually. I have tried to get some kind of automatic run (or even just status reporting) out of the tests, with minimal luck.
I have a VM where the tests can run. I experimented with my own web app, which works okay locally for running tests and reporting status. But when it is set up on the server, AutoIt reports that it cannot open the application. The same thing happens if I try to run the tests from a .bat file.
My current solution is to have AutoIt call my web app to report status (working okay locally, untested on the server), or to see whether I can get AutoIt to report results back to TeamCity. I have the agent installed, but when I run the build from TeamCity, AutoIt reports that it can't launch the application. I tried this while logged into the VM, logged out, and with RDS open; no luck.
Is it possible to run the tests manually from the VM and send the results back to TeamCity? When I run them from TeamCity, it reads the AutoIt output (which is in the expected format), but I need to let TeamCity know to update the results (so we can use TeamCity rather than my web app to show them).
I may need to find a way to let TeamCity know a build has started, which might then let it know to keep an eye on the process's output, but I'm not positive. Any ideas?
I solved this so it could be done more traditionally.
If anyone is confused by what "running the agent from the console" means: it just means installing the agent without selecting the "as a service" checkbox, and then starting the agent manually by cd'ing to the BuildAgent/bin directory and running the command agent start. I also created a batch file that does this automatically (but you must run it as admin); a sketch of it follows below.
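For reference, the batch file amounts to something like this (a sketch; the install path is an example, and on Windows the command is agent.bat):
:: Start the TeamCity agent in console mode (run from an elevated prompt)
cd /d C:\BuildAgent\bin
agent.bat start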
Further, I found that AutoIt couldn't do anything unless the test runs from the right directory, so I had to devise a solution for that as well.
The only remaining issue is that I have to keep an old laptop always connected to the virtual machine that the tests and the app run on (since the AutoIt tests won't work unless the VM desktop is interactive).
I've developed an application using Polymer 1.0. My developer computer is a Mac, and I've not had any problems during development process.
However, when I clone my application on a Windows machine (Windows 10), the tests don't work at all.
Whenever I execute polymer test or wct, the command blocks the terminal and never finishes.
On macOS or Linux it works perfectly.
The following environment variable values have saved me in a Win10 environment:
LAUNCHPAD_BROWSERS=chrome
LAUNCHPAD_CHROME=C:\Program Files (x86)\Google\Chrome\Application\chrome.exe
It is not obvious from the Polymer and launchpad documentation that you need to set a single browser with a known location to keep test runs fast during ongoing development. With auto-detection of multiple browsers it takes much longer: traversing the whole PATH and guessing at every possible browser takes forever. It is reasonable not to rely on auto-detection and to list in LAUNCHPAD_BROWSERS only the browsers whose locations you have set via the corresponding LAUNCHPAD_xxx variables.
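In a cmd session that could look like the following (a sketch; adjust the Chrome path to your installation):
:: Tell launchpad exactly which browser to use and where it lives,
:: then run the tests without slow auto-detection
set LAUNCHPAD_BROWSERS=chrome
set LAUNCHPAD_CHROME=C:\Program Files (x86)\Google\Chrome\Application\chrome.exe
polymer test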
I finally found a workaround.
It seems that there is a bug in Web Component Tester: when it tries to find all the browsers installed on a Windows machine, it takes ages to locate some of them.
To solve this problem, just add an environment variable that tells WCT which browsers are installed, so it can skip this step:
LAUNCHPAD_BROWSERS={comma-delimited list of browsers}
For instance:
LAUNCHPAD_BROWSERS=chrome,firefox,opera
Once this variable has been set, all the tests execute just like on any other OS.
There is more information about this feature in the launchpad documentation.
I'm running GitLab on Centos 7. GitLab was installed using YUM.
The initial GitLab version was 7.12.2.
The problem is with the web interface of the GitLab installation.
I'm trying to get the browser session to time out so that it forces you to log in again after a certain period.
I noticed that a change request for this was implemented, so I upgraded from 7.12.2 to 7.14.3 using yum update.
Under the administration settings (in the web UI) I can now see the extra parameter where you can set the timeout. I have changed it to two minutes (for testing, so I don't have to wait so long), but it simply does not work.
I have also tried something bigger, five minutes; that did not work either.
I have also done a gitlab-ctl stop, then gitlab-ctl reconfigure, and then gitlab-ctl start. The new value still shows, but the browser session still does not time out.
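For reference, that is the following sequence on the server:
gitlab-ctl stop
gitlab-ctl reconfigure
gitlab-ctl start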
I have also created a new CentOS 7 installation from scratch and installed GitLab via yum at version 7.14.3 - this is as-is from the installation, so no previous upgrade problems or anything similar.
I have tried different browsers (Firefox and Chrome) on Windows 7/8, and even on a Mac. I have also cleared the browser cookies to make sure the browser picks up the latest state after the updates. No change in behavior.
Changing the time still has no effect...
Does anybody have an idea what I'm doing wrong?
The truth is I think there is a bug in the current session expiration mechanism. See https://gitlab.com/gitlab-org/gitlab-ce/issues/2129
Short answer: you're not crazy ;) Hopefully we can resolve this issue shortly.
I have a test cluster that contains a linux machine, an iMac and a windows 7 PC.
The linux machine hosts JUnit tests that I wrote, and the other two machines serve as endpoints for browser-automation tests using WebDriver.
The script that executes the JUnit tests loops through different browsers and runs the tests against each one using Selenium WebDriver. So far, the browsers include iPhone, iPad, Safari (Mac), Firefox (Mac), Chrome (Mac), IE10 (Win7), Firefox (Win7), and Chrome (Win7).
While developing this test cluster, I encountered random crashes of WebDriver on each of the two endpoints and found it necessary to write a kill/restart for the WebDriver jar file. This was a relatively simple matter on the iMac, but on the Windows 7 PC it is proving extremely difficult.
The linux machine has a script that checks that the WebDriver endpoint is available by polling http://windows.Host:4444/wd/hub/status and, if it isn't, shells into PowerShell on the Windows 7 PC (I have freeSSHd set up to point to PowerShell instead of cmd.exe) and runs these commands:
Stop-Process -name java
Start-Process -FilePath C:\webDriver.bat
webDriver.bat contains:
java -jar C:\selenium-server-standalone-2.33.0.jar
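A slightly more defensive variant of the two PowerShell commands above might look like this (a sketch; the extra parameters are suggestions, not part of the original setup):
# Kill any running Selenium server, ignoring the error if none exists
Stop-Process -Name java -ErrorAction SilentlyContinue
# Relaunch the endpoint; set the working directory in case the jar uses relative paths
Start-Process -FilePath C:\webDriver.bat -WorkingDirectory C:\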
Here is the problem I am having:
When PowerShell restarts WebDriver using the above commands, the WebDriver endpoint is reachable but not visible. My tests proceed but fail, because the browser is not running in the current desktop but in some virtual one or another user's desktop. When I run webDriver.bat manually, WebDriver runs in a cmd.exe window and the tests execute fine against all Win7 browsers, provided WebDriver doesn't crash.
Here is my question:
How do I make WebDriver execute in such a way that my tests proceed and run correctly, rather than in the background or on another user's desktop? These tests are part of build verification and need to run on demand, so having someone manually run webDriver.bat is not an option.
I previously tried running WebDriver's jar as a service and using Samba to restart that service as needed, but ran into the same problem. PowerShell seemed to be a better alternative, with better control and the ability to verify that the jar file is running, but I don't know whether I am heading in the wrong direction here.
I don't relish having to learn PowerShell to accomplish something that was relatively easy on another OS, but I understand this may be my only option. I also know that the commands I'm using do not constitute a good script, and I welcome suggestions on how to better achieve my goal here.
Thanks.
Sounds like you just need to pass the host option like so:
java -jar selenium-server-standalone-2.37.0.jar -host 0.0.0.0
PowerShell might have permission restrictions on binding to all ports; these can be overcome by setting the correct policy. See my blog post here for ideas.
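For what it's worth, the availability check from the question can be scripted on the linux side along these lines (a sketch using the status URL mentioned above):
# Probe the hub; no answer or a non-2xx response means the endpoint needs a restart
curl -sf http://windows.Host:4444/wd/hub/status > /dev/null || echo "hub down, restarting"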
I am currently writing features to upload an image using the file explorer in IE.
Locally this works fine: it opens the file explorer and locates the image without any problems. However, when I run it as part of the acceptance run on the server itself, it fails to open the file explorer. It just sits there waiting; it doesn't even time out at the usual 60 seconds, so I assume something is trying to happen behind the scenes but is failing silently.
Has anyone had this issue and found a fix or workaround for it?
Most servers have IE very locked down by default, since very little browsing is typically done from servers and the browser itself represents a significant attack surface. See here for more info: http://msdn.microsoft.com/en-us/library/ms537180(v=vs.85).aspx
The result is that unless you disable this enhanced security, there are a number of things that just flat out won't work. If you are running your tests from the same server where the website is installed, you will need to disable all the enhanced security stuff in IE.
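Disabling it can be scripted. The usual approach flips the well-known IE ESC registry entries, roughly like this (a PowerShell sketch; double-check the keys on your Windows Server version):
# IE Enhanced Security Configuration keys for admins and for ordinary users
$adminKey = 'HKLM:\SOFTWARE\Microsoft\Active Setup\Installed Components\{A509B1A7-37EF-4b3f-8CFC-4F3A74704073}'
$userKey  = 'HKLM:\SOFTWARE\Microsoft\Active Setup\Installed Components\{A509B1A8-37EF-4b3f-8CFC-4F3A74704073}'
Set-ItemProperty -Path $adminKey -Name IsInstalled -Value 0
Set-ItemProperty -Path $userKey -Name IsInstalled -Value 0
# Restart Explorer so the change takes effect
Stop-Process -Name explorer -ErrorAction SilentlyContinue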
This would be a violation of best practices for a production system, but it is an understandable expedient for a test system, as an alternative to keeping a pair of systems with different OSes (client and server) as your testbed and running the tests on the client (more realistic, but it requires another system or VM to be created and maintained).