Ways to programmatically check if a website is up and functioning as expected - website-monitoring

I know this is an open-ended question, but hopefully it will get some good answers before the thread is locked...
I'm wondering what methods there are to programmatically check (language agnostic) whether a website is online from a client perspective (assume you can't make changes to the site/server, but you can rely on certain behaviours of the site).
The result of each method could stack to provide a measure of certainty that the site is up/down - that is, a method does not have to provide a definite indication if the site is up/down on its own.
Some common tests just to check 'upness' may be:
Ping the site (which in the case of shared hosting isn't very indicative)
Send an HTTP HEAD/GET request and check the status code
Others I can think of to check that the site is up and functioning:
Check that you received a well-formed HTML response, i.e. from the opening <html> tag to the closing </html> tag; if the site is experiencing trouble it may spit out an error and exit without writing the rest of the page (not all that reliable, though, because the site may handle most errors in a better way)
Check that certain content is or is not on the page, i.e. perhaps there is some content that is always present on your pages, or content that is always present in the case of an error
Can anybody think of any other methods that could be used to help determine if a site is in fact up/down and functioning/not functioning correctly from within a program?

If your GET request on a page that displays info from the database comes back with status 200 and the matching keywords are found, you can be pretty certain that your site is up and running.
And you don't really need to write your own script to do that. There are free services such as GotSiteMonitor, Pingdom, UptimeRobot etc. that allow you to monitor your site.
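As a rough illustration of that kind of check, here is a minimal Ruby sketch; the URL and keyword are placeholders for your own page and a phrase it always contains:

require 'net/http'
require 'uri'

# Hypothetical values - substitute your own page and a phrase it always contains.
uri = URI('https://example.com/')
keyword = 'Latest news'

response = Net::HTTP.get_response(uri)
up = response.is_a?(Net::HTTPSuccess) && response.body.include?(keyword)
puts(up ? 'Site looks up' : 'Site looks down or degraded')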

Base your set of tests on the unit-testing principle. It is normally used in programming to test classes, modules or other artefacts after changes have been made. You can use any of the available frameworks, so you don't have to reinvent the wheel. You must describe (implement) the tests to be run; in your case a typical test should request a URL of the site and then do some evaluations like:
the call result (for example, the return code of a curl execution)
the HTTP status code
the HTTP headers
the response MIME type
the response size
the response content (tested against a regular expression)
This way you can add, remove and modify single tests without having to care about the framework once it is in place. You can also chain tests, so you perform a login in one test and virtually click a button in a subsequent test.
There are also tools to handle such test runs automatically including visualization of results, statistics and the like.
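For example, a single check written in that style might look roughly like this; the sketch uses Ruby with Net::HTTP and RSpec, and the URL, size threshold and regular expression are made-up placeholders:

require 'net/http'
require 'uri'
require 'rspec/autorun'

RSpec.describe 'home page health check' do
  # One request per example; fine for a sketch, cache it if you add many checks.
  let(:response) { Net::HTTP.get_response(URI('https://example.com/')) }

  it 'returns HTTP 200' do
    expect(response.code).to eq('200')
  end

  it 'returns an HTML response' do
    expect(response['Content-Type']).to include('text/html')
  end

  it 'is not suspiciously small' do
    expect(response.body.bytesize).to be > 1024
  end

  it 'contains the expected content' do
    expect(response.body).to match(/Latest news/)
  end
end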

OK, it sounds like you want to test and monitor your website from a customer-experience perspective rather than purely establishing whether a server is up (using ping, for example). An effective way to replicate the customer experience is to run tests against the site using one of the headless browser testing tools (PhantomJS is a great choice), as they will render the page fully (including images, CSS, JS etc.), giving you a real page load time. These tools also allow you to make assertions on all aspects of the HTML content and HTTP response.
Pingdom recently started offering a (paid-for) service to perform these exact types of checks alongside their existing monitoring solution. The demo is worth looking at; their interface for writing the actual tests is very nice.
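If you go down the browser-driven route, the same kind of assertions are easy to script. Here is a rough sketch in Ruby using Watir driving a headless Chrome (rather than PhantomJS itself); the URL and expected text are placeholders, and the headless option assumes Watir 6+ with Chrome installed:

require 'watir'

browser = Watir::Browser.new :chrome, options: { args: ['--headless'] }
started = Time.now
browser.goto 'https://example.com/'
load_time = Time.now - started   # page load time as seen by a real browser

page_ok = browser.title.include?('Example') &&
          browser.text.include?('Expected content')
puts "Loaded in #{load_time.round(2)}s, content check #{page_ok ? 'passed' : 'failed'}"
browser.close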

Related

Ruby - Program navigator module

I have a Ruby program that uses a webdriver (Watir) to walk a page and perform tests alongside a BDD suite called RSpec.
I'm trying to optimize it for a slow server by improving its ability to navigate efficiently. Thus far it has been creating a new browser session for each test package, then closing it afterwards. This is very inefficient because it hits the login page again for every instance.
Of course, I don't want to hard-code navigation instructions into the tests because adding new spec files may change the order they are executed in, and not every page of the webapp has the main navigation bar, so navigation may need to change based on the page the last spec left the browser on.
I need some kind of master library or module that will take what page the program is at and what page it wants to go to, then bring the browser to that page so it can begin testing. What is the best way to do this?
I'm not fantastically experienced so I'd love input from more seasoned developers. Should I have each page be a class? Should I just stick with closing browsers after each test package? Should I manually code brute-force methods (gotoPage1FromPage2)?
Okay, that last one was a joke. Seriously though, what is the best way to do this?
You are exactly correct about the difficulties of maintaining state in your tests. Shutting down the browser between tests is the best way to make sure that you always know the state of the browser for each test. Sauce Labs goes so far as to spin up a new virtual machine for each of the tests they run. Ideally you decrease test time by running multiple tests in parallel.
I'm not certain I know what you mean by "test package" or how many times that means you are starting a new browser and logging in, but... another thing to consider investigating is whether you can set a cookie or use OAuth to log in without having to go through the navigation. I've worked at places that allowed admin logins to their staging environments by passing a parameter in a URL.
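As an illustration of the cookie idea (very much a sketch: the cookie name, URLs and environment variable are invented, and whether this works depends entirely on how your app manages sessions):

require 'watir'

browser = Watir::Browser.new
browser.goto 'https://staging.example.com/'                 # visit the domain first so the cookie can be set
browser.cookies.add('session_token', ENV.fetch('SESSION_TOKEN'))
browser.goto 'https://staging.example.com/account'          # lands past the login page if the token is accepted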
Your tests should be clear in their intention, which typically means your Page Object implementation does not know about what comes before or after the actions you are taking. You should be able to look at the RSpec code and reproduce exactly what it is testing. Abstracting methods for taking you from one place to another magically in the background is not a good idea.
Best practice used to be having methods on one Page Object return new Page Objects, so users could write chains like this in their tests: LoginPage.new.login.view_account.edit_address. Many of us have been bitten by this approach. Plus it isn't as easy to read as doing something like this:
LoginPage.new.login
HomePage.new.view_account
AccountPage.new.edit_address
This doesn't prevent you from using #visit methods as needed to navigate between Page Objects.
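A minimal pair of Page Objects in that style might look something like this (a sketch only; the class names, locators and URL are placeholders for whatever your app actually uses, and the browser is passed in explicitly):

require 'watir'

class LoginPage
  def initialize(browser)
    @browser = browser
  end

  def login(user, password)
    @browser.goto 'https://example.com/login'
    @browser.text_field(name: 'username').set user
    @browser.text_field(name: 'password').set password
    @browser.button(name: 'commit').click
  end
end

class HomePage
  def initialize(browser)
    @browser = browser
  end

  def view_account
    @browser.link(text: 'My account').click
  end
end

# In a spec:
#   browser = Watir::Browser.new
#   LoginPage.new(browser).login('user', 'secret')
#   HomePage.new(browser).view_account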

JMeter exclude URL patterns not working

I was using the JMeter HTTPS Test Script Recorder to record a login request.
Please see the snapshot: I already added the URL patterns to exclude the .js files, but I still get the .js requests.
Why does it fail?
You can check this by looking at the contents of the requests in question. Most likely they are GET requests, and most likely they have one or more parameters. The regex .*\.js looks specifically for .js at the end of the URL. But if a GET request has parameters, its recorded URL will look like <...>.js?param=value, so the regex .*\.js will not match (although the name of the request will still look the same).
So you need to specify two regex exclusions: .*\.js and .*\.js\?.* (note the escaped question mark, since ? is a regex metacharacter).
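You can sanity-check the patterns outside JMeter. For example, in Ruby, anchoring the regex to mimic the recorder's whole-URL matching:

url = 'https://example.com/assets/app.js?version=42'

puts url.match?(/\A.*\.js\z/)      # => false - the query string breaks the whole-URL match
puts url.match?(/\A.*\.js\?.*\z/)  # => true  - this pattern covers the ?param=value case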
I know that it doesn't answer your question, but excluding images and .js files is not something you should normally be doing. I would rather use that field to filter out the "external" URLs which are not connected to your application, like 3rd-party banners, widgets, images, etc. - anything which is not related to your application under test. Even if you see them in the response, these entities are being loaded from external sources which you cannot control, so they are not interesting, and they might distort the picture of your load test.
So I would suggest the following:
In "Grouping" drop-down choose Store 1st sampler of each trade group only
Make sure that Follow Redirects and Retrieve All Embedded Resources are turned on in the recorded requests. If not, enable them via HTTP Request Defaults. Also check that the Use concurrent pool box is ticked, as real browsers download images, styles and scripts in a multi-threaded manner.
When it comes to running your test, add an HTTP Cache Manager to your test plan as well, since well-behaved browsers download images, scripts and styles only once; on subsequent requests they are served from the browser's cache, and this situation needs to be properly simulated.
For anyone else arriving here from Google looking for an answer to this question:
You may simply be looking at the wrong place.
If you're looking at the workbench results tree, you'll see all requests. They are not filtered here. I've thought this was a bug with JMeter more times than I care to admit.
Instead, look inside the Recording Controller tree (which is collapsed by default), where the results are in fact being filtered.

Monitoring AJAX requests between a Flash applet and a server via a Google Chrome extension

I am playing a Flash-only game that uses AJAX to communicate with the server. The problem is that all the data is "drawn" and most of it is not copy/pastable, so I end up retyping URLs and similar things from parts of it (e.g., from the chat).
I thought I'd make a simple page action extension for Chrome that would intercept all the AJAX communication between the game and the server, the way Developer tools can do it, and display only the data I'm interested in (parsing URLs and similar stuff is a no-brainer).
However, looking around the internet, I've found no info on how to do this. Many sites (including answers to some questions here) mention using Developer Tools (I'd prefer having a page action extension, simple enough to share with other players, but any other automation is welcome as well), and some mention chrome.webRequest (which seems to be able to provide only the headers).
I also thought of making a content script along the lines of this answer, but since I'm trying to read the data exchanged between a Flash applet (not a web page) and a server, I don't think injecting JavaScript code is possible.
So, my question is: can this be done and, if yes, how?
In case anyone got the wrong idea, the aim of this is only to monitor the communication and extract the parts I'd want to be able to copy/paste, not change any data (i.e., the purpose is simplification of the game play, not cheating).

specific limitations of AJAX?

I'm still pretty new to AJAX and JavaScript, but I'm getting there slowly.
I have a web-based application that relies heavily on MySQL; there are individual user accounts that are accessed, and the UI is populated with user-specific data.
I'm working on getting rid of a tabbed navigation bar that currently loads new pages, because all that changes from page to page is the information within one box.
The thing is, that box needs to reload info from the database, etc.
I have had great help from users here showing that I need to call the database within the PHP page that AJAX is calling.
OK, so pardon the lengthy intro. What I'm wondering is: are there any specific limitations to what AJAX can call that I need to know about? For example, someone mentioned that it's best not to call script files, and that I should remove scripts from the PHP page that is being called and keep those in the 'parent' page. Are there any other things like this I need to keep in mind?
To clarify: I'm not looking to discuss the merits/drawbacks of the technology. I'm wondering about specific coding implementation details that I need to be aware of (for example, I didn't realize until yesterday that even if I had established a MySQL connection on the page, I would need to re-establish that connection in the called page as well... makes perfect sense now).
XMLHttpRequest, which powers AJAX, has a number of limitations. I recommend brushing up on the same-origin policy. This is a pivotal rule because it limits where AJAX calls can be made.
First, you can't have Javascript embedded in the HTTP response to an AJAX call. That's a security issue.
There's no mention of the dynamics of the database, but if the data to be displayed in the tabs doesn't have to be real-time, why not cache it server-side?
I find that, like any other technology, AJAX works best in tightly controlled conditions. It doesn't make much sense for updating nearly the whole page, unless you find that the user experience is improved with an on-page 'loader'. Without going into workarounds, the disadvantages include losing the browser back button / history, issues such as the one your friend mentioned, embedded resources and other rich content suffering as well, and simply having an extra layer of complexity to deal with in your app. Don't treat it as magic sauce for your app; make sure every use delivers specific results that benefit your client / audience.
IMHO, it's best to put your client-side JavaScript in a separate file and then import it; it makes for a neater container. One thing I've faced before is getting XML back which contains code to run, such as more JavaScript. It's worth checking early on whether this is likely and avoiding it, rather than having to resort to evals.

Screen scraping an ASP.NET web page to retrieve data displayed in the grid view

I am using Ruby to screen scrape a web page (created in ASP.NET) which uses a GridView to display data. I am successfully able to read the data displayed on page 1 of the grid, but I'm unable to figure out how to move to the next page of the grid to read all the data.
The problem is that the page-number hyperlinks are not normal hyperlinks (with a URL) but instead JavaScript hyperlinks that cause a postback to the same page.
An example of such a hyperlink is the pager link with the visible text "6", whose href is a javascript:__doPostBack(...) call rather than a normal URL.
I recommend using Watir, a Ruby library designed for browser testing, if you're already using Ruby for processing. For one thing, it gives you a much nicer interface to the DOM elements on the page, and it makes clicking links like this easier:
ie.link(:text, '6').click
Then, of course you have easier methods for navigating the table as well. It's easy enough to automate this process:
(1..total_number_of_pages).each do |next_page|
  ie.link(:text, next_page.to_s).click
  # table processing goes here
end
I don't know your use case, but this approach has its advantages and disadvantages. For one thing, it actually runs a browser instance, so if this is something you need to frequently run quietly in the background in completely automated way, this may not be the best approach. On the other hand, if it's ok to launch a browser instance, then you don't have to worry about all that postback nonsense, and you can just click the link as if you were a user.
Watir: http://wtr.rubyforge.org/
You'll need to figure out the actual URL.
Option 1a: Open the page in a browser with good developer support (e.g. Firefox with the web development tools) and look through the source to find where __doPostBack is defined. Figure out what URL it's constructing. Note that it might not be in the main page source, but instead in something that the page loads.
Option 1b: Ditto, but have Ruby do it. If you're fetching the page with Net::HTTP you've got the tools to find the definition of __doPostBack already (the body as a string, Ruby's grep, and the ability to request additional files, such as those in script tags).
Option 2: Monitor the traffic between a browser and the page (e.g. with a logging proxy) to find out what the URL is.
Option 3: Ask the owner of the web page.
Option 4: Guess. This may not be as bad as it sounds (e.g. if the original URL ends with "...?page=1" or something) but in general this is the least likely to work.
Edit (in response to your comment on the other question):
Assuming you're using the Net::HTTP library, you can do a postback by replacing your GET with a POST, e.g. my_http.post(my_url, post_data) instead of my_http.get(my_url) (note that Net::HTTP#post also takes the form-encoded body as its second argument).
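For an ASP.NET postback specifically, that POST body normally has to carry the WebForms hidden fields (__EVENTTARGET, __EVENTARGUMENT, __VIEWSTATE and friends) scraped from the current page. A rough Net::HTTP sketch; the page URL, the control ID 'GridView1' and the pager argument are hypothetical and should be replaced with whatever your page's __doPostBack call actually uses:

require 'net/http'
require 'uri'
require 'nokogiri'

uri = URI('https://example.com/report.aspx')   # hypothetical page
doc = Nokogiri::HTML(Net::HTTP.get(uri))

# Collect the hidden WebForms fields (__VIEWSTATE etc.) from the page we already have.
hidden = doc.css('input[type=hidden]').to_h { |i| [i['name'], i['value'].to_s] }

form = hidden.merge(
  '__EVENTTARGET'   => 'GridView1',   # hypothetical control ID from the __doPostBack call
  '__EVENTARGUMENT' => 'Page$2'       # hypothetical pager argument for page 2
)

response = Net::HTTP.post_form(uri, form)
puts response.code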
Edit (in response to danieltalsky's answer):
Watir may be a really good solution for you (I'm kicking myself for not having thought of it), but be aware that you may have to manually fire the event or jump through other hoops to get what you want. As a specific gotcha, with any asynchronous fetch like this you need to make sure that the full response has come back before you scrape it; that isn't a problem when you're doing the request inline yourself.
You will have to perform the postback. The data is passed with a form POST back to the server. Like Markus said, use something like Firebug, the Developer Tools in IE 8, or Fiddler to watch the traffic. But honestly, this is a WebForms page using the bloated GridView, and you will be in for a fun adventure. ;)
You'll need to do some investigation in order to figure out what HTTP request the javascript execution is performing. I've used the Mozilla browser with the Firebug plugin and also the "Live HTTP Headers" plugin to help determine what is going on. It will likely become clear to you which requests you will need to make in order to traverse to the next page. Make sure you pay attention to any cookies getting set.
I've had really good success using Mechanize for scraping. It wraps all of the HTTP communication, HTML parsing and searching (using Nokogiri), redirection, and holding on to cookies. But it doesn't know how to execute JavaScript, which is why you will need to figure out what HTTP request to perform on your own.
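With Mechanize the hidden fields come along for free, because you submit the page's own form; you only have to fill in the postback target yourself. A sketch along the same lines (again, the URL, 'GridView1' and 'Page$2' are placeholders, and it assumes the page exposes the usual WebForms hidden fields):

require 'mechanize'

agent = Mechanize.new
page  = agent.get('https://example.com/report.aspx')   # hypothetical page

# The framework's __EVENTTARGET / __EVENTARGUMENT hidden fields should already be
# in the form; __VIEWSTATE and friends are carried across automatically on submit.
form = page.forms.first
form['__EVENTTARGET']   = 'GridView1'   # hypothetical control ID
form['__EVENTARGUMENT'] = 'Page$2'      # hypothetical pager argument

next_page = form.submit
puts next_page.search('table tr').length   # Nokogiri-style search on the new page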
