Go language test event listener?

This question is about testing in the Go language. As you probably know, most mainstream languages have their own xUnit frameworks. Most of these frameworks have the ability to listen to test run events (e.g. test case started, test case finished, test failed, and so on). This is often called a test event listener and is mainly used when writing third-party extensions for the frameworks.
My question: is there any similar way to attach to standard Go language testing framework events (http://golang.org/pkg/testing/)?

Not out of the box, but it shouldn't be hard to rig up yourself. Any function named init is guaranteed to run before any tests do, so you can do your setup there.
In your test file:
var listener pkg.EventListener

func init() {
    pkg.SetupMyFramework()
    listener = pkg.Listener()
}
Then in any test
func TestXxx(t *testing.T) {
    listener.Dispatch(pkg.MakeEvent("TestXxx", pkg.TestStarted))
    err := DoSomething()
    if err != nil {
        listener.Dispatch(pkg.MakeEvent("TestXxx", pkg.TestFailed))
        t.Fatal("Test failed")
    }
    listener.Dispatch(pkg.MakeEvent("TestXxx", pkg.TestPassed))
}
You can, of course, extend this however you want (using channels, making wrapper functions around Fatal to make this less verbose, etc), but that's the gist of how you can integrate it.
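For example, a wrapper around Fatal might look something like this (just a sketch; pkg, listener, and DoSomething are the same hypothetical pieces as above):
func failNow(t *testing.T, name, format string, args ...interface{}) {
    listener.Dispatch(pkg.MakeEvent(name, pkg.TestFailed))
    t.Fatalf(format, args...)
}

func TestYyy(t *testing.T) {
    listener.Dispatch(pkg.MakeEvent("TestYyy", pkg.TestStarted))
    if err := DoSomething(); err != nil {
        failNow(t, "TestYyy", "DoSomething failed: %v", err)
    }
    listener.Dispatch(pkg.MakeEvent("TestYyy", pkg.TestPassed))
}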

Not that I know of. It would help to know what you are trying to accomplish that requires this capability.

Go 1.10 is going to be released soon, and one new feature is go test -json (release notes: https://tip.golang.org/doc/go1.10#test).
Using go test -json you can parse the test output and send it to a third party framework.
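For example, a small program can consume that stream and forward events to whatever you like. This is only a sketch: the TestEvent struct mirrors the fields documented in go doc test2json, and the printing stands in for whatever dispatching you need.
// listener.go — usage: go test -json ./... | go run listener.go
package main

import (
    "bufio"
    "encoding/json"
    "fmt"
    "os"
    "time"
)

// TestEvent mirrors the JSON objects emitted by `go test -json`
// (see `go doc test2json` for the full field list).
type TestEvent struct {
    Time    time.Time
    Action  string // "run", "pass", "fail", "skip", "output", ...
    Package string
    Test    string
    Elapsed float64
    Output  string
}

func main() {
    sc := bufio.NewScanner(os.Stdin)
    for sc.Scan() {
        var ev TestEvent
        if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
            continue // skip lines that are not JSON events
        }
        switch ev.Action {
        case "run":
            fmt.Printf("started %s.%s\n", ev.Package, ev.Test)
        case "pass", "fail", "skip":
            fmt.Printf("%-7s %s.%s (%.2fs)\n", ev.Action, ev.Package, ev.Test, ev.Elapsed)
        }
    }
}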

Related

How to automate the process of IVR

Could anyone let me know how to automate the process of IVR, i.e. how to do unit testing on an IVR composer without making a test call every time? Thanks.
Kind of depends on what skills you have and what test you want, but here are some options:
Back2Back: probably the quickest and easiest solution is to have 2 copies of your IVR. Make one call the other and (depending on which of the two you want to test) have one be passive (= do nothing until hangup) and let the other run the flow/script you want to test.
Test tool: take something you can script or code to build a call generator tool. Usually this is something like FreeSWITCH or, if you have coding skills, a SIP stack like PJSIP with which you build your own SIP client. There are also some commercial options in this area; google for something like "SIP test tool" and you'll probably find something.
Get creative: I've seen QA engineers abuse standard HTTP web test tools to generate SIP messages. Anything capable of scripted responses (so you can send ACK and such) can work here, but it depends on your specific needs. It can be a cheap and easy solution if you know what you're doing and you don't need things like interactive audio.
If I understood the question correctly, you can write a simple app for that testing using Dasha.
Sample DSL (DashaScript) code:
start node root {
    do {
        #connectSafe("<PHONE_NUMBER>"); //make Dasha call your IVR
    }
    transitions {
        step2: goto step2 on #messageHasIntent("press_one"); //use conversational AI to understand that IVR says "press one to ..."
    }
}

node step2 {
    do {
        #sendDTMF("1"); //make selection by sending DTMF code
    }
    transitions {
        step3: goto step3 on #messageHasIntent("press_two");
    }
}

node step3 {
    do {
        #sendDTMF("2");
    }
}

//etc......
So you can design test suites for IVRs and even generate them automatically or make it a part of your CI/CD process.
If you need any help, feel free to join our dev community or drop me a line at vlad#dasha.ai.
Cheers

Clean and generic project structure for GO applications and mongodb [closed]

I want to build an API-based application using Go and MongoDB. I'm from an ASP.NET MVC background. If I were making an architecture with an MVC web application, the things to consider would be:
Separation of concerns(SoC)
DataModel
BusinessEntities
BusinessServices
Controllers
Dependency Injection and Unit of Work
Unit Testing
MoQ or nUnit
Integration with UI frameworks
Angularjs or others
RESTful urls that enables SEO
The architecture below could be a solution for my needs in MVC-based applications.
There are resources around the web for building ASP.NET or Java based applications, but I have not found a solution for Golang application architecture.
Yes, Go is different from C# or Java, but it still has structs and interfaces for creating reusable code and a generic application architecture.
With the above points in mind, how can we make a clean and reusable project structure in Go applications, with generic repositories for DB (MongoDB) transactions? Any web resources would also be a great place to start.
It depends on your own style and rules. In my company, we develop our projects this way:
Configuration is determined by environment variables, so we have a company/envs/project.sh file which has to be evaluated before running the service (outside the project in the image).
We add a zscripts folder that contains all extra scripts, like adding users or publishing a post. It is intended to be used only for debug purposes.
Data models (entities) are in a package called project/models.
All controllers and views (HTML templates) are categorized into "apps" or "modules". We use the REST path as the main group delimiter, so path /dogs goes to package project/apps/dogs and /cats to project/apps/cats.
Managers are in their own separate packages at the project's root, e.g. project/manager.
Static files (.css, .png, .js, etc.) are located at project/static/[app/]. Sometimes it is necessary to have the optional [app/] folder, but that only happens when two apps have dashboards or conflicting file names. In most cases you won't need [app/] for static resources.
Managers
We call a manager a package that contains pure functions which help apps perform their tasks, for example databases, cache, S3 storage, etc. We initialize each manager by calling package.Startup() before we start listening, and finalize it by calling package.Finalize() when the program is interrupted.
An example of a manager could be project/cache/cache.go:
type Config struct {
    RedisURL string `envconfig:"redis_url"`
}

var config Config
var client *redis.Client

func Startup(c Config) error {
    config = c
    var err error
    client, err = redis.Dial(c.RedisURL) // assign the package-level client (:= here would shadow it)
    return err
}

func Set(k, v string) error {
    return client.Set(k, v)
}
in main.go (or your_thing_test.go):
var spec cache.Config
envconfig.Process("project", &spec)
cache.Startup(spec)
And in an app (or module):
func SetCacheHandler(_ http.ResponseWriter, _ *http.Request) {
    cache.Set("this", "rocks")
}
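Putting it together, here is a sketch of what main.go might look like with the Finalize() call on interrupt mentioned above (cache.Finalize() is assumed to exist alongside Startup(); the envconfig import path and port are illustrative):
// main.go
package main

import (
    "log"
    "net/http"
    "os"
    "os/signal"

    "github.com/kelseyhightower/envconfig"

    "project/cache"
)

func main() {
    var spec cache.Config
    if err := envconfig.Process("project", &spec); err != nil {
        log.Fatal(err)
    }
    if err := cache.Startup(spec); err != nil {
        log.Fatal(err)
    }

    // Finalize managers when the program is interrupted.
    go func() {
        c := make(chan os.Signal, 1)
        signal.Notify(c, os.Interrupt)
        <-c
        cache.Finalize()
        os.Exit(0)
    }()

    log.Fatal(http.ListenAndServe(":8080", nil))
}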
Modules
A module is a container of views and controllers that is isolated from other modules; with our configuration I would recommend not creating dependencies between modules. Modules are also called apps.
Each module configures its routes using a router, sub-router or what your framework provides, for example (file project/apps/dogs/configure.go):
func Configure(e *echo.Echo) {
    e.Get("/dogs", List)
}
Then, all handlers live in project/apps/dogs/handlers.go:
// List outputs a list of all stored dog specimens.
func List(c *echo.Context) error {
    // Note the use of models.Xyz
    res := make([]models.Dog, 0)            // A little trick to not return nil.
    err := store.FindAll("dogs", nil, &res) // Call a manager to find all dogs.
    // handle error ...
    return c.JSON(200, res) // Output the dogs.
}
Finally you configure the app in main (or in a test):
e := echo.New()
dogs.Configure(e)
// more apps
e.Run(":8080")
Note: for views, you can add them to the project/apps/<name>/views folder and configure them using the same function.
Other
Sometimes we also add a project/constants and a project/utils package.
Here is what it looks like:
Note that in the above sample, templates are separated from apps; that's because it's a placeholder and the directory is empty.
Hope it was useful. Greetings from México :D.
I've also struggled with how to structure my Go web APIs in the past and don't know of any web resources that tell you exactly how to write a Go web API.
What I did was just check out other projects on GitHub and look at how they structured their code; for example, the Docker repo has very idiomatic Go code in its API.
Also, Beego is a RESTful framework that generates the project structure for you in an MVC way and, according to their docs, it can also be used for APIs.
I've been building web APIs in Go for a little while now.
You'll have to do some research but I can give you some starting points:
Building Web Apps with Go -- ebook
github.com/julienschmidt/httprouter -- for routing addresses
github.com/unrolled/render/ -- for rendering various forms of responses(JSON, HTML, etc..)
github.com/dgrijalva/jwt-go -- JSON Web Tokens
www.gorillatoolkit.org/pkg/sessions -- Session Management
And for reference on how some things work together in the end:
Go Web API Repo -- personal project
1. Separation of concerns (SoC)
I haven't worked with SoC directly, but I have my own pattern. You can adapt to whatever pattern (MVC, your own, etc.).
In my code, I separate my code into different packages:
myprojectname (package main) — Holds the very basic setup and configuration/project consts
* handlers (package handlers) — Holds the code that does the raw HTTP work
* models (package models) — Holds the models
* apis (NOT a package)
- redis (package redis) — Holds the code that wraps a `sync.Pool`
- twilio (package twilio) — Example of layer to deal with external API
Notes:
In each package other than main, I have a Setup() function (with relevant arguments) that is called by the main package.
The packages under the apis folder often exist just to initialize external Go libraries. You can also import existing libraries directly into handlers/models without an apis package.
I set up my mux as an exported global in the handlers package like this...
var Router = mux.NewRouter()
...and then create a file for each URL (different methods on the same URL go in the same file). In each file, I use Go's init() function, which is run after global variables are initialized (so it's safe to use the router) but before main() is run (so it's safe for main to assume everything has been set up). The great thing about init() is that you can have as many of those functions as you want in a single package, and they automatically get run when the package is imported.
main then imports myprojectname/handlers and serves handlers.Router.
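A minimal sketch of that layout, assuming gorilla/mux and the myprojectname name from the list above (the URL and handler names are made up for illustration):
// handlers/router.go
package handlers

import "github.com/gorilla/mux"

// Router is the exported global that every URL file registers itself on.
var Router = mux.NewRouter()

// handlers/users.go — one file per URL
package handlers

import (
    "fmt"
    "net/http"
)

// init runs after package-level variables are initialized (so Router exists)
// and before main(), so main can assume every route is registered.
func init() {
    Router.HandleFunc("/users", listUsers).Methods("GET")
}

func listUsers(w http.ResponseWriter, r *http.Request) {
    fmt.Fprintln(w, "users go here")
}

// main.go
package main

import (
    "log"
    "net/http"

    "myprojectname/handlers"
)

func main() {
    log.Fatal(http.ListenAndServe(":8080", handlers.Router))
}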
2. Dependency Injection and Unit of Work
I haven't worked with Unit of Work, so I have no idea of possible Go implementations.
For DI, I build an interface that both the real object and the mock objects will implement.
In the package, I add this to the root:
var DatabaseController DatabaseControllerInterface = DefaultController
Then, in the first line of each test, I can change DatabaseController to whatever that test needs. When not testing, it defaults to DefaultController.
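A sketch of that pattern (the interface, controllers, and method names here are made up for illustration):
// db.go
package mypkg

type DatabaseControllerInterface interface {
    FindUser(id string) (string, error)
}

type defaultController struct{}

func (defaultController) FindUser(id string) (string, error) {
    // ... real database lookup ...
    return "real user", nil
}

// DefaultController is the implementation used outside of tests.
var DefaultController DatabaseControllerInterface = defaultController{}

// DatabaseController is what the rest of the package calls; tests overwrite it.
var DatabaseController DatabaseControllerInterface = DefaultController

// db_test.go
package mypkg

import "testing"

type mockController struct{}

func (mockController) FindUser(id string) (string, error) { return "mock user", nil }

func TestFindUser(t *testing.T) {
    DatabaseController = mockController{} // first line: swap in the mock
    defer func() { DatabaseController = DefaultController }()

    got, err := DatabaseController.FindUser("42")
    if err != nil || got != "mock user" {
        t.Fatalf("unexpected result: %q, %v", got, err)
    }
}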
3. Unit Testing
Go provides built-in testing with the go test command. You can use go test -cover to also emit a coverage percentage. You can even have coverage displayed in your browser (run go test -coverprofile=coverage.out and then go tool cover -html=coverage.out), highlighting the parts that are and aren't covered.
I use the testify/assert package to help me with testing where the standard library falls short:
// something_test.go
//
// The _test signifies it should only be compiled into a test.
// Name the file whatever you want, but if it's testing code
// in a single file, I like to do filename_test.go.
package main

import (
    "testing"

    "github.com/stretchr/testify/assert"
)

func TestMath(t *testing.T) {
    assert.Equal(t, 3+1, 4)
}
4. Integration with UI frameworks
I haven't seen any for Angular. Although I haven't used it, Go has a good template engine built into the standard lib.
5. RESTful URLs that enables SEO
Again, I can't help you here. It's up to you: send proper status codes (don't send a 200 with a 404 page as you'll get docked for duplicate pages), don't duplicate pages (pay attention to google.com/something vs google.com/something/; hopefully your framework will not mess this up), don't try to trick the search engine, and so on.
To my mind, a Go webapp project folder on a production server can look like the one in your picture, just much simpler. There is nothing special in the assets structure: Static, Templates, Content, Styles, Img, JSlibs, DBscripts etc. are the usual folders. Nothing special in the Web API either: as usual, you design which URIs expose the required functionality and route requests to handlers accordingly. Some specifics: many gophers don't believe in MVC architecture, but that's up to you, surely. And you deploy one statically linked executable without dependencies. In your development environment you structure your own and imported/vendored source files in $GOPATH as is done in the stdlib, but you deploy only one executable to the production environment, along with the static assets needed. You can see how to organize Go source packages just by looking at the stdlib. Having just one executable, what would you structure on the production server?

What does event-driven programming background code look like?

We always use event-driven programming to create desktop applications with user interfaces. In such frameworks we only associate functions with events, but we can never see how it works in the background. What does the real application generated by the framework look like? What does the main function contain? I imagine something like the following:
int main()
{
    while (true)
    {
        wait_for_an_event();
        handle_the_received_event();
    }
}
If you have any clarifications or details about this, I would be very grateful.
Based on your comment about using .NET
Events in .NET are a way to signal notifications to clients of a class. They are also built in and fairly easy to implement.
You can read about them here: Events Tutorial
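For what it's worth, the loop the question sketches could look roughly like this in Go, with a channel standing in for the framework's event queue (a toy sketch, not how any particular framework is implemented):
package main

import "fmt"

// Event is whatever the framework delivers: a click, a key press, a timer tick...
type Event struct {
    Name string
}

func main() {
    events := make(chan Event)          // stands in for the OS/framework event queue
    handlers := map[string]func(Event){ // the functions the user "associates to events"
        "click": func(e Event) { fmt.Println("button clicked") },
        "quit":  func(e Event) { fmt.Println("bye") },
    }

    // Something feeds the queue; in a real framework this is the OS/window system.
    go func() {
        events <- Event{Name: "click"}
        events <- Event{Name: "quit"}
        close(events)
    }()

    // The loop from the question: wait for an event, then dispatch it.
    for e := range events {
        if h, ok := handlers[e.Name]; ok {
            h(e)
        }
    }
}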

Meteor test driven development [closed]

I don't see how to do test driven development in meteor.
I don't see it mentioned anywhere in documentation or FAQ. I don't see any examples or anything like that.
I see that some packages are using Tinytest.
I would need a response from the developers on what the roadmap is regarding this. Something along the lines of:
possible, no documentation, figure it out yourself
meteor is not built in a way that you can make testable apps
this is planned feature
etc
Update 3: As of Meteor 1.3, meteor includes a testing guide with step-by-step instructions for unit, integration, acceptance, and load testing.
Update 2: As of November 9th, 2015, Velocity is no longer maintained. Xolv.io is focusing their efforts on Chimp, and the Meteor Development Group must choose an official testing framework.
Update: Velocity is Meteor's official testing solution as of 0.8.1.
Not much has been written about automated testing with Meteor at this time. I expect the Meteor community to evolve testing best-practices before establishing anything in the official documentation. After all, Meteor reached 0.5 this week, and things are still changing rapidly.
The good news: you can use Node.js testing tools with Meteor.
For my Meteor project, I run my unit tests with Mocha using Chai for assertions. If you don't need Chai's full feature set, I recommend using should.js instead. I only have unit tests at the moment, though you can write integration tests with Mocha as well.
Be sure to place your tests in the "tests" folder so that Meteor does not attempt to execute your tests.
Mocha supports CoffeeScript, my choice of scripting language for Meteor projects. Here's a sample Cakefile with tasks for running your Mocha tests. If you are using JS with Meteor, feel free to adapt the commands for a Makefile.
Your Meteor models will need a slight bit of modification to expose themselves to Mocha, and this requires some knowledge of how Node.js works. Think of each Node.js file as being executed within its own scope. Meteor automatically exposes objects in different files to one another, but ordinary Node applications—like Mocha—do not do this. To make our models testable by Mocha, export each Meteor model with the following CoffeeScript pattern:
# Export our class to Node.js when running
# other modules, e.g. our Mocha tests
#
# Place this at the bottom of our Model.coffee
# file after our Model class has been defined.
exports.Model = Model unless Meteor?
...and at the top of your Mocha test, import the model you wish to test:
# Need to use Coffeescript's destructuring to reference
# the object bound in the returned scope
# http://coffeescript.org/#destructuring
{Model} = require '../path/to/model'
With that, you can start writing and running unit tests with your Meteor project!
Hi all, check out Laika, a whole new testing framework for Meteor:
http://arunoda.github.io/laika/
You can test both the server and client at once.
See some laika example here
See here for features
See concept behind laika
See Github Repository
Disclaimer: I'm the author of Laika.
I realize that this question is already answered, but I think this could use some more context, in the form of an additional answer providing said context.
I've been doing some app development with meteor, as well as package development, both by implementing a package for meteor core, as well as for atmosphere.
It sounds like your question might be actually a question in three parts:
How does one run the entire meteor test suite?
How does one write and run tests for individual smart packages?
How does one write and run tests for one's own application?
And, it also sounds like there may be a bonus question in there somewhere:
4. How can one implement continuous integration for 1, 2, and 3?
I have been talking with, and have begun collaborating with, Naomi Seyfer (@sixolet) on the Meteor core team to help get definitive answers to all of these questions into the documentation.
I had submitted an initial pull request addressing 1 and 2 to meteor core: https://github.com/meteor/meteor/pull/573.
I had also recently answered this question:
How do you run the meteor tests?
I think that @Blackcoat has definitively answered 3, above.
As for the bonus, 4, I would suggest using circleci.com, at least to do continuous integration for your own apps. They currently support the use case that @Blackcoat described. I have a project in which I've successfully gotten tests written in CoffeeScript to run unit tests with Mocha, pretty much as @Blackcoat described.
For continuous integration on meteor core, and smart packages, Naomi Seyfer and I are chatting with the founder of circleci to see if we can get something awesome implemented in the near term.
RTD has now been deprecated and replaced by Velocity, which is the official testing framework for Meteor 1.0. Documentation is still relatively new as Velocity is under heavy development. You can find some more information on the Velocity Github repo, the Velocity Homepage and The Meteor Testing Manual (paid content)
Disclaimer: I'm one of the core team members of Velocity and the author of the book.
Check out RTD, a full testing framework for Meteor here rtd.xolv.io.
It supports Jasmine/Mocha/custom and works with both plain JS and coffee. It includes test coverage too that combines unit/server/client coverage.
And an example project here
A blog to explain unit testing with Meteor here
An e2e acceptance testing approach using Selenium WebdriverJS and Meteor here
Hope that helps. Disclaimer: I am the author of RTD.
I used this page a lot and tried all of the answers, but from my beginner's starting point, I found them quite confusing. Once I ran into any trouble, I was flummoxed as to how to fix it.
This solution is really simple to get started with, if not fully documented yet, so I recommend it for people like myself who want to do TDD but aren't sure how testing in JavaScript works and which libraries plug into what:
https://github.com/mad-eye/meteor-mocha-web
FYI, I found that I also needed to use the router Atmosphere package to make a '/tests' route to run and display the results from the tests, as I didn't want them cluttering my app every time it loads.
About the usage of Tinytest, you may want to take a look at these useful resources:
The basics are explained in this screencast:
https://www.eventedmind.com/feed/meteor-testing-packages-with-tinytest
Once you understood the idea, you'll want the public API documentation for tinytest. For now, the only documentation for that is at the end of the source of the tinytest package: https://github.com/meteor/meteor/tree/devel/packages/tinytest
Also, the screencast talks about test-helpers, you may want to have a look at all the available helpers in here:
https://github.com/meteor/meteor/tree/devel/packages/test-helpers
There is often some documentation inside each file.
Digging into the existing tests of Meteor's packages will provide a lot of examples. One way of doing this is to search for Tinytest. or test. in the packages directory of Meteor's source code.
Testing becomes a core part of Meteor in the upcoming 1.3 release. The initial solution is based on Mocha and Chai.
The original discussions of the minimum viable design can be found here and the details of the first implementation can be found here.
MDG have produced the initial bones of the guide documentation for the testing which can be found here, and there are some example tests here.
This is an example of a publication test from the link above:
it('sends all todos for a public list when logged in', (done) => {
    const collector = new PublicationCollector({userId});
    collector.collect('Todos.inList', publicList._id, (collections) => {
        chai.assert.equal(collections.Todos.length, 3);
        done();
    });
});
I'm doing functional/integration tests with Meteor + Mocha in the browser. I have something along the lines of the following (in coffeescript for better readability):
On the client...
Meteor.startup ->
  Meteor.call 'shouldTest', (err, shouldTest) ->
    if err? then throw err
    if shouldTest then runTests()

# Dynamically load and run mocha. I factored this out in a separate method so
# that I can (re-)run the tests from the console whenever I like.
# NB: This assumes that you have your mocha/chai scripts in .../public/mocha.
# You can point to a CDN, too.
runTests = ->
  $('head').append('<link href="/mocha/mocha.css" rel="stylesheet" />')
  $.getScript '/mocha/mocha.js', ->
    $.getScript '/mocha/chai.js', ->
      $('body').append('<div id="mocha"> </div>')
      chai.should() # ... or assert or explain ...
      mocha.setup 'bdd'
      loadSpecs() # This function contains your actual describe(), etc. calls.
      mocha.run()
...and on the server:
Meteor.methods 'shouldTest': -> true unless Meteor.settings.noTests # ... or whatever.
Of course you can do your client-side unit testing in the same way. For integration testing it's nice to have all Meteor infrastructure around, though.
As Blackcoat said, Velocity is the official TDD framework for Meteor, but at this moment Velocity's webpage doesn't offer good documentation. So I recommend you watch:
Concept behind velocity
Step by step tutorial
And specially the Official examples
Another option, made easily available since 0.6.0, is to run your entire app out of local smart packages, with a bare minimum amount of code outside of packages to boot your app (possibly invoking a particular smart package that is the foundation of your app).
You can then leverage Meteor's Tinytest, which is great for testing Meteor apps.
I've successfully been using xolvio:cucumber and Velocity to do my testing. They work really well and run continuously, so you can always see that your tests are passing.
Meteor + TheIntern
Somehow I managed to test a Meteor application with TheIntern.js.
It is tailored to my needs, but I still think it may point someone in the right direction, so I am sharing what I have done to resolve this issue.
There is an execute function which allows us to run JS code through which we can access the browser's window object, and hence Meteor as well.
Want to know more about execute?
This is how my test suite looks for functional testing:
define(function (require) {
    var registerSuite = require('intern!object');
    var assert = require('intern/chai!assert');

    registerSuite({
        name: 'index',

        'greeting form': function () {
            var rem = this.remote;
            return this.remote
                .get(require.toUrl('localhost:3000'))
                .setFindTimeout(5000)
                .execute(function() {
                    console.log("browser window object", window)
                    return Products.find({}).fetch().length
                })
                .then(function (text) {
                    console.log(text)
                    assert.strictEqual(text, 2,
                        'Yes I can access Meteor and its Collections');
                });
        }
    });
});
To know more, this is my gist
Note: I am still in a very early phase with this solution. I don't know whether I can do complex testing with it or not, but I am pretty confident about it.
Velocity is not mature yet. I am facing setTimeout issues when using Velocity. For server-side unit testing you can use this package.
It is faster than Velocity, which takes a huge amount of time when I test any spec that involves a login. With Jasmine code we can test any server-side method and publication.

How do you figure out what test will best represent the feature you want to create?

The Wikipedia article on test-driven development says to first develop a test that will fail because the feature does not exist, then build the code to pass the test. What does this test look like?
How do you figure out what test will best represent the feature you want to create?
Can someone give an example?
For example, if I add a logout button feature to a web application, would the test be hitting the page looking for the button? Or what?
I've heard test-driven development is nice for regression testing; I just don't know how to start integrating it into my work.
Well, obviously there are areas that are more suited to TDD than others, and frontend development is one of the areas that I find difficult to do TDD on. But you can.
You can use WatiN or WebAii to do that kind of test. You could then:
Write a test that checks if a button exists on the page... fail it, then implement it, and pass.
Write a test that clicks the button and checks for something to change on the frontend; fail it, implement the feature, and pass the test.
But normally you would test the logic behind the actions that you perform. You would test the logout functionality on your authentication service, which is called by your event handler in WebForms, or by the controller action in MVC.
What does this test look like?
A test has 3 parts.
it sets up a context
it performs an action
it makes an assertion that the action did what it was supposed to do
How do you figure out what test will best represent the feature you want to create?
Tests are not based on features (unless you are talking about a high level framework like cucumber), they are based on "units" of code. Typically a unit is a function, and you will write multiple tests to assert all possible behaviors of that function are working correctly.
Can someone give an example?
It really varies based on the framework you use. Personally, my favorite is shoulda, which is an extension to the ruby Test::Unit framework
Here is a shoulda example from the readme. In the case of a BDD framework like this, contextual setup happens in its own block
class UserTest < Test::Unit::TestCase
  context "A User instance" do
    setup do
      @user = User.find(:first)
    end

    should "return its full name" do
      assert_equal 'John Doe', @user.full_name
    end

    context "with a profile" do
      setup do
        @user.profile = Profile.find(:first)
      end

      should "return true when sent #has_profile?" do
        assert @user.has_profile?
      end
    end
  end
end
Like if I make a logout button feature to a web application then would the test be hitting the page looking for the button? or what?
There are 3 main types of tests.
First you have unit tests (which is what people usually assume you are talking about when you talk about TDD testing). A unit test tests a single unit of work and nothing else. This means that if your method usually hits a database, you make sure that it doesn't actually hit that database for the duration of the test (using a technique called "mocking").
Next, you have integration tests. An integration test usually involves interaction with the infrastructure and is more of a "full stack" test. So from your top-level API, if you have an insert method, you would go through the full insert and then test the resulting data in the database. Because there is more setup in these sorts of tests, they shouldn't really be run from developer machines (it is better to automate them on your build server).
Finally, you have UI testing. This is the most unreliable, and requires a UI scripting framework like Selenium or Watir to automate clicking around your UI. Don't go crazy with this sort of testing, because these tests are notoriously fragile (a small change can break them), and they won't catch whole classes of issues anyway (like styling).
The unit test would be calling the logout function and verifying that the expected results occurred (the user's login record ended, for example).
Clicking the logout button would be more like an acceptance test, which is also a good thing to do and (in my opinion) well within the scope of TDD, but it tests TWO features: the button, and the resulting action.
It depends on what platform you are using as to how your tests would appear. TDD is much harder in ASP.NET WebForms than ASP.NET MVC because it's very difficult to mock up the HTTP environment in WebForms to get the expected state of Session, Application, ViewState etc. as opposed to ASP.NET MVC.
A typical test is built around Arrange Act Assert.
// Arrange
... setup needed elements for this atomic test
// Act
... set values and/or call methods
// Assert
... test a single expected outcome
It's very difficult to give deeper examples unless you let us know the platform you plan to code with. Please give us more information.
Say I want to make a function that will add one to a number (really simple example).
First off, write a test that says f(10) == 11, then do one that says f(10) != 10. Then write a function that passes those tests. If you realise the function needs more capabilities, add more tests.
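In Go, for instance, that first failing test and the code that makes it pass might look like this (the package and function names are illustrative):
// addone_test.go — written first; it fails until AddOne exists and is correct.
package addone

import "testing"

func TestAddOne(t *testing.T) {
    if got := AddOne(10); got != 11 {
        t.Errorf("AddOne(10) = %d, want 11", got)
    }
    if got := AddOne(10); got == 10 {
        t.Errorf("AddOne(10) should not return its input unchanged")
    }
}

// addone.go — the simplest code that passes the tests.
package addone

func AddOne(n int) int {
    return n + 1
}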
The test would be making sure that when the logout function was executed, the user was successfully logged out. Generally a unit testing framework such as NUnit or MSTest (for .NET) would be used.
Web applications are notoriously hard to unit test because of all the contextual information generally required for the execution of server code on a web server. However, a typical example would mock up that information, call the logout logic, and then verify that the correct result was returned. A loose example is an MVC-style test using NUnit and Moq:
[Test]
public void LogoutActionShouldLogTheUserOut()
{
    var mockController = new Mock<HomeController>() { CallBase = true };
    var result = mockController.Object.Logout() as ViewResult;

    Assert.That(result.ViewName == "LogoutSuccess",
        "Logout function did not return logout view!");
}
This is a loose example because really it's just testing that the "LogoutSuccess" view was returned, and not that any logout logic was executed. In a real test I would mock an HttpContext and ensure the session was cleared or whatever, but I just copied this ;)
Unit tests would not be testing that a UI element was properly wired up to an event handler. If you wanted to ensure that the whole application was working from top to bottom, this would be called integration testing, and you would use something besides unit tests for this. Tools such as Selenium are commonly used for web integration tests, whereas macro recording programs are often used for desktop applications.
