There is a complex testing suite with Capybara, Selenium, and Allure, and all of it is misused:
it launches and quits Chrome on its own instead of leaving that job to Capybara
before and after hooks are placed in the wrong spots, with side effects such as writing to a remote DB even on a plain rspec --dry-run
the Allure reports gem is pretty much abandoned and, I believe, is also integrated here with questionable practices
Now when I run the tests they sometimes hang, so I press ^C, but they don't stop:
RSpec is shutting down and will print the summary report... Interrupt again to force quit.
It does not stop no matter how long I wait, even when I close the browser manually.
When I press ^C again it prints nothing -- no backtrace.
How do I know where it hangs? How do I get a backtrace of it at any moment?
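For reference, a minimal sketch (assuming plain MRI on a POSIX system and that SIGUSR1 is otherwise unused) of dumping every thread's backtrace on demand, e.g. from spec_helper.rb, so a hung run can be inspected with kill -USR1 <pid>:
Signal.trap("USR1") do
  Thread.list.each do |thread|
    puts "== #{thread.inspect}"
    puts (thread.backtrace || ["<no backtrace>"]).join("\n")
  end
end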
I have a Pi 3 that I'd like to set up once and then do zero config/maintenance on thereafter. So far I have the working program and a script set up to run automatically on boot (to handle power disconnects, etc.), and now I would like to automate the internet connection (to handle wifi disconnects, etc.).
The wifi chip is on the Pi 3; however, to get internet connectivity you have to open a browser and accept terms/conditions every time you reconnect. I am wondering if there is a way in Ruby to basically check for an internet connection, and if there is no connection, open the browser, click accept, then check again and continue.
In Ruby, there are two main browser automation frameworks:
Capybara and Watir.
Both were initially created for testing your own apps, but they can also be used in normal code.
In my experience, Watir tends to be faster and is more object oriented, so I use it when I'm writing larger drivers; Capybara, however, is easier to script and easier to read.
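A rough sketch of the Capybara route, assuming the default :selenium driver; the connectivity-check URL, the portal address, and the "Accept" button label are all guesses that would need adjusting for the actual captive portal:
require 'capybara/dsl'
require 'net/http'

Capybara.run_server = false
Capybara.default_driver = :selenium
include Capybara::DSL

def online?
  # generate_204 returns 204 only when the request isn't intercepted by a portal
  res = Net::HTTP.get_response(URI("http://connectivitycheck.gstatic.com/generate_204"))
  res.is_a?(Net::HTTPNoContent)
rescue StandardError
  false
end

loop do
  unless online?
    visit "http://192.168.1.1/portal"   # hypothetical portal address
    click_button "Accept"               # hypothetical button label
  end
  sleep 60
end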
I'm using Google Cloud Platform and have a virtual machine. I'm also messing around with web scrapers.
I'm currently trying to do a simple scrape of reddit using a Ruby script. That part works pretty well. It essentially continues down and down (to the end of reddit!) scraping the articles, though this obviously takes some time.
Right now, in order to scrape (I'm running ruby scrape.rb > reddit.txt) I have to keep the Google virtual machine's SSH browser window open on my computer or the process will exit (which makes enough sense). However, what I'd like to do is have the process persist even if I close the window.
Is there a way to somehow have this process continue to run? Then I can periodically log in and check reddit.txt, which will continue to grow even when I'm not SSH'ed in.
Thanks!
This seems to be the answer.
The command nohup is exactly what I was looking for!
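For reference, with the script from the question it ends up looking something like nohup ruby scrape.rb > reddit.txt 2>&1 &, after which you can close the SSH window; nohup makes the process ignore the terminal's hangup signal, and the trailing & puts it in the background.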
I am having selenium troubles. I will do my best to explain the setup I have and what I'm trying to do.
The short version:
Running my automated tests locally using a Ruby webdriver gem works fine. Running the exact same script through the selenium-server standalone jar (whether remotely or locally) does not work without strange alterations to the code.
Is there a way to get a version of the selenium server standalone jar that behaves the same way as the webdriver client libraries?
Or is there some remote Ruby gem version of the Selenium server, so that the gemified version of Selenium can somehow work remotely?
Basic summary: "I feel like I am in Hooters and asking for my meal to go. I am missing something very fundamental here."
What I'm trying to do:
I am trying to automate testing of a web application. The idea is to be able to load the company's website and then interact with the page elements like a real user would (click a link, put text into a text box, select radio buttons and checkboxes, etc.). I developed these tests using ruby 1.9.3 and the selenium-webdriver gem on a mac (10.8.2). I created my own Ruby wrapper library, which I have called "WebAutomation.rb", in which I wrote my own methods for clicking on elements. An example of one of the wrapper methods is as follows:
def WebAutomation.click_element_by_attribute(attribute_name, attribute_value, tag, contains=true)
  element = WebAutomation.find_element_by_attribute(attribute_name, attribute_value, tag, contains)
  @log.debug("Element returned: #{element}")
  raise "Could not click element with #{attribute_name} attribute of #{attribute_value}" unless element.click
end
And WebAutomation.find_element_by_attribute calls another method that looks through all the elements that I give it. As another layer of abstraction, I am not running the ruby code directly; I am running it through cucumber scripts. That's not my problem, though. This code all works locally - by that I mean when the browser being driven is local to the code being run.
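The actual find_element_by_attribute isn't shown here; purely as a hypothetical illustration (not the real implementation), something along these lines would do the same job:
def WebAutomation.find_element_by_attribute(attribute_name, attribute_value, tag, contains=true)
  # Hypothetical sketch: scan every <tag> element and match on the attribute value
  @driver.find_elements(:tag_name, tag).find do |el|
    value = el.attribute(attribute_name).to_s
    contains ? value.include?(attribute_value) : value == attribute_value
  end
end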
However, I want to be all fancy-like and not have to run the code locally, because I'm on a mac and, let's say, I want to do cross-browser testing like running it on IE. So I have both a remote mac and a remote windows laptop, with the goal of running this through some system like Jenkins, where the Jenkins box would be able to tell these remote machines to run browser tests. I'm not that far yet, so I'm not worried about Jenkins. I'm just trying to get the remote versions of the tests to pass.
My troubles:
Running my cucumber/ruby scripts locally works great. They're awesome, and I thought I was the man. Then I ran them against the remote mac using the same browser (chrome), and everything went to pot.
Here's what I'm doing:
On the remote mac laptop, I downloaded the selenium-server standalone jar and started it like so:
java -jar selenium-server-standalone-2.31.0.jar
It looks happy to me:
Mar 14, 2013 8:00:06 AM org.openqa.grid.selenium.GridLauncher main
INFO: Launching a standalone server
08:00:11.606 INFO - Java: Oracle Corporation 23.6-b04
08:00:11.608 INFO - OS: Mac OS X 10.8.2 x86_64
08:00:11.616 INFO - v2.31.0, with Core v2.31.0. Built from revision 1bd294d
08:00:11.728 INFO - RemoteWebDriver instances should connect to: http://127.0.0.1:4444/wd/hub
08:00:11.729 INFO - Version Jetty/5.1.x
08:00:11.730 INFO - Started HttpContext[/selenium-server/driver,/selenium-server/driver]
08:00:11.730 INFO - Started HttpContext[/selenium-server,/selenium-server]
08:00:11.731 INFO - Started HttpContext[/,/]
08:00:11.744 INFO - Started org.openqa.jetty.jetty.servlet.ServletHandler@4f8429d6
08:00:11.744 INFO - Started HttpContext[/wd,/wd]
08:00:11.747 INFO - Started SocketListener on 0.0.0.0:4444
08:00:11.747 INFO - Started org.openqa.jetty.jetty.Server@4dfbca86
So then I run my cucumber scripts, passing some command line arguments that tell it the IP of the remote mac laptop, which browser to run, and which environment I want the browser to point at for our application (this part isn't important).
It looks like this:
cucumber REMOTE_URL=http://10.110.10.233:4444/wd/hub BROWSER=chrome JJ_ENV=staging features/jabberjaw/contact_us.feature:3
I have code such that, when a REMOTE_URL parameter is passed, a remote browser is started instead of a local one. The code that gets executed is below:
# This is the code that runs for a local browser
def WebAutomation.set_browser(browser)
  @log.debug("Starting browser: #{browser}")
  @driver = Selenium::WebDriver.for(browser)
end

# If the remote url is passed in, then I make a remote browser
def WebAutomation.set_remote_browser(url, browser)
  @log.debug("Starting remote browser: #{browser} at #{url}")
  @driver = Selenium::WebDriver.for(:remote, :url => url, :desired_capabilities => browser)
end
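For context, a hypothetical sketch (names guessed from the command line above, not the actual support code) of how the dispatch between the two could look, given that cucumber exposes NAME=value command-line arguments through ENV:
if ENV['REMOTE_URL']
  WebAutomation.set_remote_browser(ENV['REMOTE_URL'], ENV['BROWSER'].to_sym)
else
  WebAutomation.set_browser((ENV['BROWSER'] || 'chrome').to_sym)
end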
The browser window pops up on the remote machine, goes to the correct url, and logs in. However, when running remotely, the selenium-server seems to have issues clicking elements that are not visibly on the screen. I fixed that (sort of) with a
@driver.action.move_to(element, 100, 100).perform
I had to add the 100, 100 offset because even moving seemed to only get to the top left corner, and the element still wasn't on the screen. The other thing: you know that exception I raise unless element.click (code above)? Yeah, that triggers regardless of whether the click really happened or not, because for some reason the remote version (selenium-server) returns nil from element.click whether it succeeds or not. When I run the same code locally (where it uses the webdriver gem), it gives me {} when successful and nil when it's not. So to get this code to work remotely on chrome I had to do the following:
def WebAutomation.click_element_by_attribute(attribute_name, attribute_value, tag, contains=true)
  element = WebAutomation.find_element_by_attribute(attribute_name, attribute_value, tag, contains)
  @driver.action.move_to(element, 100, 100).perform
  @log.debug("Element returned: #{element}")
  element.click
end
In essence, explicitly moving to the element found and just trusting that the click works. Yes, with cucumber I do have a thin layer of protection in that the next step in the script should be a Then step that checks whether whatever the click was supposed to do actually succeeded, but it feels wrong that I have to take out that exception and potentially open myself up to false positives.
And even this altered code fails completely on a remote version of firefox (I had to move down to firefox 18, since anything above 18 seems not to work with selenium-webdriver - even locally - it just opens the browser window and does nothing else). On firefox, the browser window comes up, navigates to the url, and logs in (up to this point, it's like chrome), but then it just gives me the finger and says "MoveElementTargetOutOfBoundsException".
I have also tried taking out the "remoteness" by running the selenium-server jar locally and running my tests on a local browser, but through the selenium-server jar. Like so!
cucumber REMOTE_URL=http://localhost:4444/wd/hub BROWSER=chrome JJ_ENV=staging features/jabberjaw/contact_us.feature:3
And I get the same results, so I'm pretty convinced that my problem is that the selenium-server jar is interpreting my scripts utterly differently than my selenium-webdriver gem does. I cannot be the first person to have run into this, but I have googled until my eyes bled and cannot find a solution.
There has to be some way that the client code that runs locally is interpreted the same way remotely, yeah? This can't be a new problem, because if I have to create weird custom code for whether I am running locally or whether it's remote and whether it's chrome and whether it's firefox then this whole "automation is robust and awesome because you can do cross-browser testing and scale across environments" is some mayo-mustard packed cheesecake. That creme filling? Do not want.
So it looks like I was able to finagle my way to a partial solution.
Apparently just starting the selenium-server standalone jar was not enough. The following got me a little farther.
On the server system (the remote system that will be opening and interacting with the actual browser):
java -jar selenium-server-standalone-2.31.0.jar -role hub
Then in a new terminal window (still server system):
java -jar selenium-server-standalone-2.31.0.jar -role node http://localhost:4444/grid/register
Then on the client system (the one with the actual cucumber and webdriver scripts)
cucumber REMOTE_URL=http://10.110.10.233:4444/wd/hub BROWSER=firefox JJ_ENV=staging
Note: if you're following along at home, this command will be different, since I coded my scripts to accept the REMOTE_URL and BROWSER variables and map them to a remote webdriver call.
In any case, this allowed the original code, without the explicit move method, to work. I still had to eliminate my raise condition, because clicks still give me nil whether successful or not when going remotely. Firefox also no longer throws the MoveElementTargetOutOfBoundsException.
The tests still seem much more fragile than when run locally, but it's at least some progress. If anyone has any information about why my clicks always return "nil" when run remotely, whether successful or not, I'd appreciate it. If anyone also has information about why adding these role parameters (and perhaps the registration) seems to stabilize things compared to the plain java -jar selenium-server-standalone-2.31.0.jar command that almost all the tutorials I read told me to run, I'd be interested in that too.
However, like a big bowl of fiber, these commands have at least gotten me unblocked. Nothing worse than major blockage. I hope that if anyone else is having similar troubles, this helps you as well.
I have gotten myself unblocked on windows remote testing as well. Apparently for that, I needed to get chromedriver.exe and ieserverdriver.exe and put them on the server system. On windows, you still need to start the hub, but when starting the node, you'll need to add the following parameters:
-Dwebdriver.chrome.driver=<path_to_chromedriver.exe> -Dwebdriver.ie.driver=<path_to_ieserverdriver.exe>
And then from my client machine, I had to use port 5555 instead of 4444.
I have a GUI Ruby tool that needs to spawn a child command-line process, for example ping. If I do this on Windows, a console window will appear and disappear for the console process, which is very annoying. Is it possible to start a process from a GUI Ruby script with no console window visible? If I use the backtick operator or Kernel#system, the console window appears; see the example below:
require 'tk'
require 'thread'
Thread.new { `ping 8.8.8.8` }
TkRoot.new.mainloop
The issue is that every executable on Windows is defined to be either a GUI executable or a Console executable (well, there's more detail than that, but it doesn't matter here) at the time it is built. The executable that's running your Ruby script is a GUI executable (it also happens to use Tk to actually build a GUI, even if only a very simple one in your screenshot) and the ping executable is a Console executable. If a GUI executable starts a Console executable, a console is automatically created to run the executable in; you can't change this.
Of course, the picture is more complex than that. A console application can actually work with the GUI (it just needs to make the right API calls), and you can use a whole catalogue of tricks to keep the console window out of the way (such as starting ping through an appropriately configured shortcut file), but such things are rather awkward. The easiest way is to have the console window be there the whole time by making Ruby itself a console app (by naming your script with the .rb suffix, not .rbw). Yes, it doesn't really get rid of the problem, but it stops any annoying flashing.
If you were using ping as the real purpose of your app (i.e., to find out whether services were up), then I'd ask whether it is possible/advisable to switch to writing the checking code directly in Ruby by connecting to the service instead of pinging it, as ping only measures whether the target OS kernel is alive, not the service executable. This is a fine distinction, but I've seen machines get into a state where no executables were running yet the machine still responded to pings; this was very strange and can totally break your mental abstractions, but it can happen. Since you're only using ping as an example, though, I think you can just focus on the (rather problematic) console handling. Still, if you can do it without running a subprocess, definitely choose that method (on Windows; if you were on any sort of Unix you wouldn't have this problem at all).
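If that route appeals, here is a minimal sketch of checking a service directly from Ruby; the host, port, and timeout are placeholders:
require 'socket'

def service_up?(host, port, timeout = 2)
  # True only if a TCP connection to host:port succeeds within the timeout
  Socket.tcp(host, port, connect_timeout: timeout) { true }
rescue SystemCallError, SocketError
  false
end

puts service_up?("example.com", 80)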
It is indeed possible to spawn processes with Ruby; there are a couple of ways to do it. I am not sure what you mean by
"the console window will appear and disappear for the console process"
but I think the best way for you to do it is to simply grab stdout and stderr and show them to your user in your own window. If you want the native windows console to appear, you probably need to do something fancy with windows scripting.
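For instance, a rough sketch of grabbing both streams with Open3 instead of letting the child write to its own console (whether a console window still flashes depends on how the Ruby interpreter itself was started, e.g. ruby vs rubyw):
require 'open3'

# Run ping, capture stdout/stderr, and surface them in your own UI or log
stdout, stderr, status = Open3.capture3("ping", "8.8.8.8")
puts stdout
warn stderr unless stderr.empty?
puts "ping exited with status #{status.exitstatus}"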
One way to keep a spawned console alive is to have it run a batch file with a PAUSE command at the end:
runping.bat:
ping %1
pause
exit
In your ruby file:
Thread.new {`start runping.bat 8.8.8.8`}