How to use rspec to test screen scraping? - ruby

I'm writing a site that is going to rely a lot on screen scraping. Because I know screen scraping is prone to breaking I'd like to get notified somehow that there is a problem.
The solution that I think will work is to write an rspec test for each site I want to support. The test will open a few remote pages from each site and compare them with the output I expect from my scraper. I'd like to also run the same tests on locally cached copies so I know whether my code changes broke the scraper or the remote site changed. I'd like to somehow run these tests once a day and be notified of any problems.
Eventually I'd like to make this a gem, because it's a recurring problem for me. I tend to do a lot of scraping and it would be nice to know when things break.
So my problem is I'm relatively new to writing tests for my code and I have no clue what the best way to set this up is.
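To make the cached-copy half of that plan concrete, here is a minimal, framework-free sketch. The scrape_title helper and the fixture content are hypothetical stand-ins for your real scraper and saved pages; in practice the live check would fetch the same URL and run the identical assertion, so a live failure with a green offline check points at the remote site changing rather than your code.

```ruby
require "tmpdir"

# Hypothetical scraper: extracts the page title from raw HTML.
# In a real project this would live in your gem (likely using Nokogiri).
def scrape_title(html)
  html[%r{<title>(.*?)</title>}m, 1]
end

Dir.mktmpdir do |dir|
  # "Locally cached copy" of a remote page, saved on an earlier run.
  cached = File.join(dir, "example_com.html")
  File.write(cached, "<html><head><title>Example Domain</title></head></html>")

  # Offline check: did a code change break the scraper?
  raise "scraper broken" unless scrape_title(File.read(cached)) == "Example Domain"
end
puts "offline scraper check passed"
```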

Take a look at the VCR gem, which will let you get local copies of various pages you want to test, while having the ability to refresh them every so often, as well as testing against live pages.
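As a sketch of how that looks in practice (assuming the vcr and webmock gems are in your Gemfile; the cassette name and the ExampleSiteScraper class are made up for illustration), the first run records the real HTTP response, later runs replay the local cassette, and re_record_interval refreshes it periodically:

```ruby
# spec/spec_helper.rb
require "vcr"

VCR.configure do |c|
  c.cassette_library_dir = "spec/cassettes"  # local copies live here
  c.hook_into :webmock
end

# In a spec, wrap the scrape in a cassette; re-record weekly (interval in seconds).
RSpec.describe "ExampleSite scraper" do
  it "extracts the title" do
    VCR.use_cassette("example_site", re_record_interval: 7 * 24 * 3600) do
      expect(ExampleSiteScraper.title("http://example.com")).to eq("Example Domain")
    end
  end
end
```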

Related

How do you write UI tests for a command line app in Xcode?

I'm making a small Zork-type game, and I don't want to have to type my way through the whole thing to test the game play every time I change something. Is there a way to use UI testing to do it for me?
I've tried looking around, but everyone just talks about running UI tests from the command line. I'd like to know how to do it for a console app.
Now I don't know what your codebase looks like, but your best bet is probably to create test files that run the logic you want to test. You may want to make all logic input independent so that input can be passed by either your tests or a user.
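The advice above is language-agnostic; here is a minimal sketch in Ruby (the same pattern ports to Swift for an Xcode project). The game loop reads from an injected input stream and writes to an injected output stream, so a test can drive it with canned commands instead of a terminal; the commands and responses are invented for illustration:

```ruby
require "stringio"

# The loop never touches STDIN/STDOUT directly: any IO-like objects work.
def game_loop(input, output)
  output.puts "You are in a dark room."
  while (cmd = input.gets&.chomp)
    break if cmd == "quit"
    output.puts(cmd == "look" ? "You see a lamp." : "Nothing happens.")
  end
end

# A test drives the game with a scripted session and inspects the transcript.
out = StringIO.new
game_loop(StringIO.new("look\nquit\n"), out)
puts out.string
```

In the real app you would call `game_loop($stdin, $stdout)`; in tests, StringIO stands in for both.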

How do I optionally slow down cucumber (ruby) tests?

I have tried googling, but had no luck. Maybe I just don't know the terms to search for...
Recently we made some changes to the environment, and now tests that used to run without issue kill the service. We are working to find out why that is happening... but in the meantime, is there a way I could pass a CLI command or something to slow down the tests on demand? (or vice versa, run at full speed on demand) Or maybe build something into a rake task?
I know I could easily add an after hook to sleep between scenarios, but I want to be able to run the tests full blast as well while we are trying to sort out the issue. Adding an after hook would require editing several files every time we wanted to turn the throttling on or off.
UPDATE:
I decided to try adding this to env.rb, and I think it might work, although it feels crude. If you have other suggestions I would love to hear them. This is just a temporary fix, though; once we figure out what is up with the environment we do need to go back and add a more elegant way of slowing tests down when needed, perhaps through the HTTP client.
After do
  if ENV['SLOW'].eql? 'yes'
    sleep(3)
    # logger.info '******* Waiting 3 seconds before running next scenario *******'
  end
end
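A small variation on the hook above makes the delay length configurable rather than hard-coded, so `SLOW=5 cucumber` waits 5 seconds between scenarios and a plain `cucumber` runs at full speed. The scenario_delay helper name is invented for this sketch; in env.rb you would register `After { sleep(scenario_delay) if scenario_delay > 0 }`:

```ruby
# features/support/env.rb (sketch): parse the throttle from the environment.
# Returns 0.0 (no delay) when SLOW is unset or not a number.
def scenario_delay(env = ENV)
  Float(env["SLOW"], exception: false) || 0.0
end

puts scenario_delay({ "SLOW" => "2.5" })  # a hypothetical invocation
```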

How to run Jasmine on multiple pages?

A lot of examples out there focus on writing tests, and pretty much everything is contained in one single HTML file.
I'm more interested in how to actually run Jasmine within a real-life app (which is not a SPA).
For the first test I included my SpecRunner.html at the very end of my app's framework:
include PATH_TOOLS.'/tests/jasmine/SpecRunner.html';
and this works: wherever I go in my app I have the test results at the bottom of the page. Obviously, mixing test code with the framework's code like this is not the cleanest approach; every time I push my commits to the repository I'd have to remove this line.
On the other hand, if I open SpecRunner.html directly I cannot navigate to my app from there, unless it were opened in an iframe, but is that a common practice? I doubt it. I know I can always run Jasmine in a terminal, but I would prefer to see the results beside my app.
Perhaps I can somehow run Jasmine from command line and force it to open my app in a real browser, like Selenium does?

How to test the newly added functionality in a Joomla 2.5 site which is live

I am working on a dynamic site on Joomla!, most of the coding is done in Juni module and component.
I have some dynamic features which I want to test on my already-published site, but I fear something may go wrong if I attach them to the live site.
I want to ask: are there any modules or plugins for Joomla! which allow me to test my dynamic functionality on the published site? And is there any extension to recover my site to a previous state (like a version control system for my site)?
Here is a quick checklist of what you can do:
The easiest way to play around is to install Joomla! on your own computer, where you can test everything without any worries.
To get your website "versioned", or to have a complete backup, the most used and trusted solution is AkeebaBackup.
My advice would be NOT to play directly with the live website without doing a backup, especially if you are "testing" stuff.
I actually think it makes more sense to test first on a copy that is in the exact same server environment as the live site.
Testing live is always hard; sooner or later you have to do it, but you want to get as much testing done as you can before then. Depending on how the feature is rendered, you may be able to use ACLs to prevent it from being rendered to normal users.

Best way to test a Delphi application

I have a Delphi application that has many dependencies, and it would be difficult to refactor it to use DUnit (it's huge), so I was thinking about using something like AutomatedQA's TestComplete to do the testing from the front-end UI.
My main problem is that a bugfix or new feature sometimes breaks old code that was previously tested (manually), and used to work.
I have setup the application to use command-line switches to open-up a specific form that could be tested, and I can create a set of values and clicks needed to be done.
But I have a few questions before I do anything drastic... (and before purchasing anything)
Is it worth it?
Would this be a good way to test?
The result of the test should be in my database (Oracle); is there an easy way in TestComplete to check these values (multiple fields in multiple tables)?
I would need to set up a test database to do all the automated testing; would there be an easy way to automate re-setting the test db? Other than drop user cascade, create user, ..., impdp.
Is there a way in TestComplete to specify command-line parameters for an exe?
Does anybody have any similar experiences?
I would suggest you plan to use both DUnit and something like TestComplete, as they each serve a different purpose.
DUnit is great for Unit Testing, but is difficult to use for overall application testing and UI testing.
TestComplete is one of the few automated testing products that actually has support for Delphi, and our QA Engineer tells me that their support is very good.
Be aware, though, that setting up automated testing is a large and time-consuming job. If you rigorously apply unit testing and automated UI testing, you could easily end up with more test code than production code.
With a large (existing) application you're in a difficult situation with regards to implementing automated testing.
My recommendation is to set up Unit Testing first, in conjunction with an automated build server. Every time someone checks anything in to source control, the Unit Tests get run automatically. DO NOT try to set up unit tests for everything straight up - it's just too big an effort for an existing application. Just remember to create unit tests whenever you are adding new functionality, and whenever you are about to make changes. I also strongly suggest that whenever a bug is reported that you create a unit test that reproduces the bug BEFORE you fix it.
I'm in a similar situation. (Large app with lots of dependencies). There is almost no automated testing. But there is a big wish to fix this issue. And that's why we are going to tackle some of the problems with each new release.
We are about to release the first version of the new product. And the first signs are good. But it was a lot of work. So next release we sure need some way to automate the test process. That's why I'm already introducing unit tests. Although due to the dependencies, these are no real unit tests, but you have to start somewhere.
Things we have done:
Introduced a more OO approach, because a big part of the code was still procedural.
Moved stuff between files.
Eliminated dependencies where possible.
But there is far more on the list of requirements, ensuring enough work for the entire team until retirement.
And maybe I'm a bit strange, but cleaning up code can be fun. Refactoring without unit tests is a dangerous undertaking, especially if there are a lot of side effects. We used pair programming to avoid stupid mistakes. And lots of test sessions. But in the end we have cleaner code, and the amount of new bugs introduced was extremely low.
Oh, and be sure that you know this is an expensive process. It takes lots of time. And you have to fight the tendency to tackle more than one problem in a row.
I can't answer everything, as I've never used testcomplete, but I can answer some of those.
1 - Yes. Regression testing is worth it. It's quite embarrassing to you as a developer when the client comes back to you when you've broken something that used to work. Always a good idea to make sure everything that used to work, still does.
4 - Oracle has something called Flashback which lets you create a restore point in the database. After you've done your testing you can just jump back to this restore point. You can write scripts to use it too, e.g. FLASHBACK DATABASE TO TIMESTAMP TO_TIMESTAMP('2009-02-12 00:00:00', 'YYYY-MM-DD HH24:MI:SS');, etc.
We're looking at using VMWare to isolate some of our testing.
You can start from a saved snapshot, so you always have a consistent environment and local database state.
VMWare actions can be scripted, so you can automatically install your latest build from a network location, launch your tests and shut down afterwards.
Is it worth it?
Probably. Setting up and maintaining tests can be a big job, but when you have them, tests can be executed very easily and consistently. If your project is evolving, some kind of test suite is very helpful.
Would this be a good way to test?
I would say that proper DUnit test suite is better first step. However if you have large codebase which is not engineered for testing, setting up functional tests is even bigger pain than setting up GUI tests.
The result of the test should be in my database (Oracle); is there an easy way in TestComplete to check these values (multiple fields in multiple tables)?
TestComplete has ADO and BDE interface. Or you can use OLE interface in VBScript to access everything that's available.
Is there a way in TestComplete to specify command-line parameters for an exe?
Yes.
One way to introduce unit testing in an (old) application could be to have a "start database" (like the Flashback feature described by Rich Adams).
The program then runs unit tests using DUnit to control the GUI.
See "GUI testing with DUnit" at http://delphixtreme.com/wordpress/?p=181
Every test run starts by restoring the "start database", so a known set of data can be used.
I would need to set up a test database to do all the automated testing; would there be an easy way to automate re-setting the test db?
Use transactions: perform a rollback when the test completed. This should revert everything to the initial state.
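The transaction-rollback pattern is language-agnostic; here is a sketch in Ruby for brevity (the same idea applies from Delphi test code against Oracle). FakeDb is a hypothetical stand-in for a real connection; the point is that the rollback runs unconditionally, so test data never persists:

```ruby
# Minimal stand-in for a database connection with transactional rollback.
class FakeDb
  def initialize; @rows = []; end
  attr_reader :rows

  def transaction
    snapshot = @rows.dup
    yield self
  ensure
    @rows = snapshot  # always roll back: the test's writes never persist
  end

  def insert(row); @rows << row; end
end

db = FakeDb.new
db.transaction do |t|
  t.insert("test order")
  raise "insert failed" if t.rows.empty?  # assertions run inside the transaction
end
puts db.rows.size  # the database is back to its initial state
```

With a real database, the equivalent is to open a transaction in the test's setup and issue ROLLBACK in its teardown.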
Recommended reading:
http://xunitpatterns.com/
