Can you implement TDD in Pega?

As the title says, we use Pega extensively, and I was wondering whether it is possible to implement TDD in the same fashion as in .NET or Java.

It depends on the version of the Pega platform you are using.
Prior to Pega 7.2.2, test cases were created by running a Rule and recording the Clipboard state before and after the run. The initial recorded state was used to set up the environment for every test case run, and the final recorded state was taken as the expected reference state after each run. There was no convenient way to configure this.
As such, it was impossible to implement TDD using the built-in Pega test case capabilities, because you had to implement your rule completely before creating a test case for it.
In Pega 7.2.2 you can manage the way the environment is set up for a test case run and the assertions that are made. But be aware that Pega test cases still lack rule dependency isolation, so you cannot test a Rule in isolation.
We are using Pega extensively as well, so given the aforementioned restrictions we decided to create our own testing framework for Pega.
I've described the problem of unit testing Pega applications in more detail in the following article:
https://www.linkedin.com/pulse/gaining-confidence-comprehensive-continuous-pega-7-unit-lutay


Does implementing CI/CD require prerequisite steps?

I'm trying to understand CI/CD strategy.
Many CI/CD articles describe it as automating the build, test, and deploy phases.
I would like to know: does the CI/CD concept have any prerequisite step(s)?
For example, if I make a simple tool that automatically builds and deploys, but the test step is manual, can this be considered CI/CD?
There's a minor point of minutiae that should be mentioned first: the "D" in "CI/CD" can mean either "Delivery" or "Deployment". For the sake of this question, we'll accept the two terms as relatively interchangeable -- but be aware that others may apply a narrower definition, which may differ slightly depending on which "D" you mean, specifically. For additional context, see: Continuous Integration vs. Continuous Delivery vs. Continuous Deployment
For example, if I make a simple tool that automatically builds and deploys, but the test step is manual - can this be considered CI/CD?
Let's break this down. First, let's establish what can be considered "CI/CD". Easy enough: if your (automated) process is practicing both CI (continuous integration) and CD (continuous deployment), then we can consider the solution to be some form of "CI/CD".
We'll need some definitions for CI and CD (see the link above), which may vary by opinion. But since the question is whether this can be considered CI/CD, we can proceed on the lowest common denominator / bare minimum of popular, accepted definitions and apply those definitions liberally as they relate to the principles of CI/CD.
With that context, let's proceed to determine whether the constituent components are present.
Is Continuous Integration being practiced?
Yes. Continuous Integration is being practiced in this scenario. Continuous integration, in its most basic sense, is making sure that your ongoing work is regularly (continually) integrated (tested).
The whole idea is to combat the consequences of integrating (testing) too infrequently. If you make many changes and never try to build or test the software, any of those changes may very well have broken the build, but you won't know until the point in time where integration (testing) occurs.
You are regularly integrating your changes and making sure the software still builds. This is unequivocally CI in practice.
But there are no automated tests?!
One may make an objection to the effect of "if you're not running what is traditionally thought of as tests (unit|integration|smoke|etc) as part of your automated process, it's not CI" -- this is a demonstrably false statement.
Even though in this case you mention that your "test" steps would be manual, it's still fair to say that simply building your application is sufficient to meet the basic definition of a "test" in the sense of continuous integration. Successfully building (e.g. compiling) your code is, in itself, a test. You are effectively testing "can it build?". If your code change breaks the compile/build process, your CI process will tell you so right after you commit your code -- that's CI in action.
Just like code changes may break a unit test, they can also break the compilation process -- automating your build tests that your changes did not break the build and is, therefore, a kind of continuous integration, without question.
Sure, your product can be broken by your changes even if it compiles successfully. It may even be the case that those software defects would have been caught by sufficient unit testing. But the same could be said of projects with proper unit tests, even projects with "100% code coverage". We certainly don't consider projects with test gaps as not practicing CI. The size of the test gap doesn't make the distinction between CI and non-CI; it's irrelevant to the definition.
Bottom line: building your software exercises (integrates/tests) your code changes, even if only to a minimal degree. Doing this on a continuous basis is a form of continuous integration.
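To make the "successfully building is a test" point concrete, here is a minimal sketch in Java (using the standard javax.tools compiler API; the source file path is a hypothetical example) of a check that does nothing but compile the code and fail loudly when it doesn't build, which is exactly the feedback a build-only CI job gives you:

    import javax.tools.JavaCompiler;
    import javax.tools.ToolProvider;

    public class BuildCheck {
        public static void main(String[] args) {
            // The JDK's built-in compiler (requires a JDK, not just a JRE).
            JavaCompiler compiler = ToolProvider.getSystemJavaCompiler();

            // Compile one (hypothetical) source file; a real job would pass the whole source set.
            int result = compiler.run(null, null, null, "src/main/java/com/example/App.java");

            if (result != 0) {
                // A non-zero exit code means the change broke the build: the "test" failed.
                System.err.println("Build check failed: the code no longer compiles.");
                System.exit(1);
            }
            System.out.println("Build check passed: the code still compiles.");
        }
    }

Running something like this (or simply your regular build tool) on every commit already satisfies the minimal sense of continuous integration described here, even though no conventional test framework is involved.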
Is Continuous Deployment/Delivery being practiced?
Yes. It is plain to see in this scenario that if you are deploying/delivering your software to whatever its 'production environment' is in an automated fashion, then you have the "CD" component of CI/CD, at least to some minimal degree. The fact that your tests may be manual is not consequential.
Similar to the above, reasonable people could disagree on the effectiveness of the implementation depending on the details, but one would not be able to make the case that this practice is non-CD, by definition.
Conclusion: can this practice be considered "CI/CD"?
Yes. Both elements of CI and CD are present to at least a minimal degree. The practices used probably can't reasonably be called non-CI or non-CD. Therefore, it should be concluded that the described practice can be considered "CI/CD".
It goes without saying that the described CI/CD process has gaps and could benefit from improvement; with the lack of automated tests and other features, it doesn't reap all the benefits a robust CI/CD process could offer. However, this doesn't render the process non-CI/CD by any means. It's certainly CI/CD in practice; whether it's a particularly good or robust CI/CD practice is a matter of opinion.
does the CI/CD concept have any prerequisite step(s)?
No, there are no specific prerequisites (like writing automated software tests, for example) to applying CI/CD concepts. You can apply both CI and CD independently of one another without any prerequisites.
To further illustrate, let's think of an even more minimal project with "CI/CD"...
CD could be as simple as committing to the main branch of a GitHub Pages repository. If that same Pages repo, for example, uses Jekyll, then you have CI too, as GitHub will build your project automatically in addition to deploying it, and inform you of build errors when they occur.
In this basic example, the only thing needed to implement "CI/CD" was committing the Jekyll project code to a GitHub Pages repository. No prerequisites.
There are even cases where you can accurately consider a project as having a CI process that doesn't build any software at all! CI could, for example, consist solely of code style checks or other trivial checks like checking for newlines at the end of files. When a project includes only these kinds of checks, we would still call that check process "CI", and it wouldn't be an inaccurate description of the process.
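As a concrete illustration of that last point, here is a minimal sketch in Java (the "src" directory and the .java extension are assumptions) of the kind of trivial check that could, on its own, make up a CI step: it walks the source tree and fails the build if any file does not end with a newline.

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.util.List;
    import java.util.stream.Stream;

    public class NewlineCheck {
        public static void main(String[] args) throws IOException {
            // Collect every .java file under a (hypothetical) "src" directory.
            try (Stream<Path> paths = Files.walk(Path.of("src"))) {
                List<Path> offenders = paths
                        .filter(p -> p.toString().endsWith(".java"))
                        .filter(NewlineCheck::missingTrailingNewline)
                        .toList();

                if (!offenders.isEmpty()) {
                    offenders.forEach(p -> System.err.println("Missing trailing newline: " + p));
                    System.exit(1); // fail the CI step
                }
            }
        }

        private static boolean missingTrailingNewline(Path p) {
            try {
                byte[] bytes = Files.readAllBytes(p);
                // Empty files are fine; otherwise the last byte must be a newline.
                return bytes.length > 0 && bytes[bytes.length - 1] != '\n';
            } catch (IOException e) {
                throw new RuntimeException(e);
            }
        }
    }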

Why are smoke tests useful with Continuous Integration?

We usually run smoke tests to check critical functionality whenever we receive a new build. After executing the smoke tests, we are sure we can go to the next stage (the next level of testing). I heard from my colleagues that smoke tests are really useful when your team employs Continuous Integration and DevOps. Smoke tests are always beneficial, but how are they more beneficial in combination with CI and DevOps?
Testing is interesting and presents a new challenge for QA every time, requiring a higher level of effort for the final deployment of the product. This includes continuous delivery within a continuous integration environment. Such a continuous deployment process requires testing to run in parallel in order to keep the process moving.
I've usually heard smoke testing used to refer to manual testing that you run to sanity-check builds. This article defines smoke testing as follows:
Smoke Testing, also known as “Build Verification Testing”, is a type of software testing that comprises of a non-exhaustive set of tests that aim at ensuring that the most important functions work. The results of this testing is used to decide if a build is stable enough to proceed with further testing.
First, I would certainly hope that people are doing this whenever they check code into the main branch to ensure that their changes didn't break the software in some obvious way. That holds whether you're doing continuous integration or not. (One of my personal pet peeves has always been people who check in code and then leave for the day without checking to make sure that it worked).
Also, keep in mind that in a typical CI cycle nowadays a build will often occur with every checkin to the main branch (or, at a minimum, there will be a nightly automated build; at my current company we have both), so you don't really have time to manually run your entire test suite for every build. One of the main purposes of CI is to have integration (and, as an extension, builds) occur much more frequently than is typical in other kinds of development cycles.
As one final comment: if you're doing continuous integration, I'd strongly encourage you to have some kind of automated testing (e.g. coded UI tests, unit tests, etc.) as part of that. Those can provide basic smoke/sanity testing and regression testing and reduce the burden of having to do all of it manually for every build.
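For instance, the automated smoke/sanity layer can be as small as a couple of checks run against the freshly built and deployed application. A minimal sketch using JUnit 5 and Selenium WebDriver (the URL, page title, and element id are hypothetical):

    import org.junit.jupiter.api.AfterEach;
    import org.junit.jupiter.api.BeforeEach;
    import org.junit.jupiter.api.Test;
    import org.openqa.selenium.By;
    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.chrome.ChromeDriver;

    import static org.junit.jupiter.api.Assertions.assertTrue;

    class SmokeTest {

        private WebDriver driver;

        @BeforeEach
        void openBrowser() {
            driver = new ChromeDriver();
        }

        @Test
        void homePageLoads() {
            // Critical path #1: the application is up and serving its landing page.
            driver.get("https://test.example.com");
            assertTrue(driver.getTitle().contains("My Application"));
        }

        @Test
        void loginFormIsPresent() {
            // Critical path #2: the login form is rendered.
            driver.get("https://test.example.com/login");
            assertTrue(driver.findElement(By.id("username")).isDisplayed());
        }

        @AfterEach
        void closeBrowser() {
            driver.quit();
        }
    }

Run as the first stage after each build or deployment, a suite like this answers the "is this build stable enough for further testing?" question automatically on every check-in.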

Selenium Testing Architecture

I am trying to optimize the current Automation testing we use for our application. We currently use a combination of Selenium and Cucumber.
Right now the layers we use are:
TEST CASE -> SELENIUM -> BROWSER.
I have seen recommendations that it's better to use TEST CASE -> FRAMEWORK -> SELENIUM -> BROWSER, so that when changes happen in the UI you only need to update the framework and not each test case.
The question: our scripts are currently broken up into individual steps, so when changes to the UI happen we only need to update a script or two. Is it better to keep this approach, with several scripts that execute for each test case, or to go to the framework approach, where the classes, methods, etc. reside in the framework and the test cases just call those methods with parameters for each step?
It depends on:
the life cycle of your testing project: a project with a long life cycle is more worth developing a framework for than a short one.
how often you need to update your test cases (which in turn depends on how often the web pages under test change): a volatile web page will demand that its test scripts be updated more regularly. Having a framework improves maintainability (that is, if the framework is well written).
Introducing a framework has the following pros and cons (a sketch of what the framework layer can look like follows the list):
Pros: easier maintenance; you no longer need to modify your code in multiple test cases, which saves effort and time. You also get to reuse your framework for future projects, which will save you time and effort in the long run.
Cons: development overhead; extra money and effort are required to achieve it. If the project is small and short, the effort and money you spend on introducing a framework may even outweigh its benefits.
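To make the framework approach concrete, here is a minimal Page Object sketch in Java with Selenium (the URL, locators, and class names are hypothetical). The test layer only calls framework methods, so a UI change is absorbed in one place instead of in every script:

    import org.openqa.selenium.By;
    import org.openqa.selenium.WebDriver;

    // Framework layer: knows *how* to drive the login page.
    public class LoginPage {
        private final WebDriver driver;

        // Locators live here, so a UI change is fixed in one place.
        private final By username = By.id("username");
        private final By password = By.id("password");
        private final By submit = By.cssSelector("button[type='submit']");

        public LoginPage(WebDriver driver) {
            this.driver = driver;
        }

        public LoginPage open() {
            driver.get("https://test.example.com/login");
            return this;
        }

        public void loginAs(String user, String pass) {
            driver.findElement(username).sendKeys(user);
            driver.findElement(password).sendKeys(pass);
            driver.findElement(submit).click();
        }
    }

    // Test layer (e.g. a Cucumber step definition or a JUnit test) only states *what* happens:
    //   new LoginPage(driver).open().loginAs("alice", "secret");

When the UI changes, only LoginPage changes; the step definitions and test cases that call loginAs() stay untouched, which is the maintainability benefit described in the pros above.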

Can I use Unit Testing tools for Integration Testing?

I'm preparing to create my first Unit Test, or at least that's how I was thinking of it. After reading up on unit testing this weekend, I suspect I actually want to do Integration Testing. I have a black box component from a 3rd party vendor (e.g. a digital scale API) and I want to create tests to test its usage in my application. My goal is to determine if a newly released version of said component is working correctly when integrated into my application.
The use of this component is buried deep in my application's code and the methods that utilize it would be very difficult to unit test without extensive refactoring which I can't do at this time. I plan to, eventually.
Considering this fact, I was planning to write custom Unit Tests (i.e. not derived from one of my classes' methods or properties) to put this 3rd party component through the same operations that my application will require from it. I do suspect that I'm circumventing a significant benefit of Unit Testing by doing it this way, but as I said earlier, I can't stop and refactor this particular part of my application at this time.
I'm left wondering if I can still write Unit Tests (using Visual Studio) to test this component or is that going against best practices? From my reading it seems that the Unit Testing tools in Visual Studio are very much designed to do just that - unit test methods and properties of a component.
I'm going in circles in my head; I can't determine whether what I want is a Unit Test (of the 3rd party component) or an Integration Test. I'm drawn to Unit Tests because they provide a managed system to execute tests, but I don't know if they are appropriate for what I'm trying to do.
Your plan of putting tests around the 3rd party component, to prove that it does what you think it does (what the rest of your system needs it to do), is a good idea. This way, when you upgrade the component, you can tell quickly if it has changed in ways that mean your system will need to change. This would be an Integration Contract Test between that component and the rest of your system.
Going forward it would behoove you to put that 3rd party component behind an interface upon which the other components of your system depend. Then those other parts can be tested in isolation from the 3rd party component.
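A minimal sketch of that idea in Java (all the scale-related names here are hypothetical stand-ins for the vendor's API and your own code): the rest of the application depends only on the interface, so it can be unit tested against a fake while the real adapter is covered by your contract tests.

    // Interface owned by your application, shaped by what *your* code needs from the device.
    interface Scale {
        double readWeightInKg();
    }

    // Thin adapter over the 3rd party component. VendorScaleDevice stands in for the
    // real vendor class; only this adapter ever touches the vendor API directly.
    class VendorScaleAdapter implements Scale {
        private final VendorScaleDevice device = new VendorScaleDevice();

        @Override
        public double readWeightInKg() {
            return device.readGrams() / 1000.0;
        }
    }

    // Placeholder for the vendor's class, only so this sketch is self-contained.
    class VendorScaleDevice {
        double readGrams() { return 2500.0; }
    }

    // A hand-rolled fake lets the rest of the system be tested without the real device.
    class FakeScale implements Scale {
        private final double kg;
        FakeScale(double kg) { this.kg = kg; }
        @Override
        public double readWeightInKg() { return kg; }
    }

Your contract tests then target VendorScaleAdapter (and will catch a vendor upgrade that changes behaviour), while everything else in the application is tested against FakeScale in isolation.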
I'd refer to Michael Feathers' Working Effectively with Legacy Code for information on ways to go about adding unit tests to code which is not factored well for unit tests.
Testing the 3rd party component the way you are doing it is certainly not against best practices.
Such a test would, however, be classified as a (sub-)system test, since a) the 3rd party component is tested as an isolated (sub-)system, and b) your testing goal is to validate the behaviour at API level rather than to test the lower level implementation aspects.
The test would definitely not be classified as an integration test, because you are simply not testing the component together with your code. That is, you will, for example, not find out whether your code uses the 3rd party component in a way that violates that component's expectations.
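To sketch the distinction with the digital-scale example from the question (JUnit 5 assumed; VendorScaleAdapter reuses the hypothetical wrapper from the sketch in the previous answer, and ShippingCalculator is a hypothetical stand-in for your own code that uses it):

    import org.junit.jupiter.api.Test;
    import static org.junit.jupiter.api.Assertions.assertTrue;

    class ScaleTests {

        // (Sub-)system test: exercises the 3rd party component on its own,
        // validating the behaviour your application relies on at API level.
        @Test
        void vendorScaleReportsAPositiveWeightForAKnownLoad() {
            VendorScaleAdapter scale = new VendorScaleAdapter();
            assertTrue(scale.readWeightInKg() > 0.0);
        }

        // Integration test: exercises *your* code together with the component,
        // so it can also catch your code using the component incorrectly.
        @Test
        void shippingQuoteIsComputedFromARealScaleReading() {
            ShippingCalculator calc = new ShippingCalculator(new VendorScaleAdapter());
            assertTrue(calc.quoteFor("parcel-42") > 0.0);
        }
    }

    // Placeholder for your own code that uses the scale, just to keep the sketch self-contained.
    class ShippingCalculator {
        private final Scale scale;
        ShippingCalculator(Scale scale) { this.scale = scale; }
        double quoteFor(String parcelId) { return 2.0 + scale.readWeightInKg() * 4.0; }
    }

The first test is the kind this answer classifies as a (sub-)system test; only the second exercises the interaction between your code and the component, which is what would make it an integration test.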
That said, I would like to make two points:
The fact that a test is not a unit-test does not make it less valuable. I have encountered situations where I told people that their tests were not unit-tests, and they got angry at me because they thought I wanted to tell them that their tests did not make sense - an unfortunate misunderstanding.
To what category a test belongs is not defined by technicalities like which testing framework you are using. It is rather defined by the goals you want to achieve with the test, for example, which types of errors you want to find.

TDD: refactoring and global regressions

While the refactoring step of test-driven development should always involve another full run of tests for the given functionality, what is your approach to preventing possible regressions beyond the functionality itself?
My professional experience makes me want to retest the whole functional module after any code change. Is it what TDD recommends?
Thank you.
While the refactoring step of test-driven development should always involve another full run of tests for the given functionality, what is your approach to preventing possible regressions beyond the functionality itself?
When you are working on a specific feature, it is enough to run tests for the given functionality only. There is no need to do a full regression.
My professional experience makes me want to retest the whole functional module after any code change.
You do not need to do a full regression, but you can, since unit tests are small, simple and fast.
Also, there are several tools that are used for "Continuous Testing" in different languages:
in Ruby (e.g. Watchr)
in PHP (e.g. Sismo)
in .NET (e.g. NCrunch)
All these tools are used to run tests automatically on your local machine to get fast feedback.
Only when you are about to finish the implementation of the feature is it time to do a full run of all your tests.
Running tests on a Continuous Integration (CI) server is essential, especially when you have lots of integration tests.
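One lightweight way to get this split is to tag the tests that belong to the feature you are currently driving, run only that tag locally for fast feedback, and let the CI server run everything. A sketch using JUnit 5 tags with Maven's Surefire plugin (the tag name and PriceCalculator are hypothetical):

    import org.junit.jupiter.api.Tag;
    import org.junit.jupiter.api.Test;
    import static org.junit.jupiter.api.Assertions.assertEquals;

    @Tag("pricing") // the feature currently under TDD
    class PricingTest {

        @Test
        void discountIsAppliedToLargeOrders() {
            assertEquals(90.0, new PriceCalculator().priceFor(10, 10.0), 0.001);
        }
    }

    // Hypothetical production class driven out by this test.
    class PriceCalculator {
        double priceFor(int quantity, double unitPrice) {
            double total = quantity * unitPrice;
            return quantity >= 10 ? total * 0.9 : total; // 10% discount on large orders
        }
    }

    // Locally, while red/green/refactoring on this feature:
    //   mvn test -Dgroups=pricing
    // Before finishing the feature, and on the CI server: run the whole suite:
    //   mvn test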
TDD is just a methodology for writing new code or modifying old code. Your entire test base should be run every time a modification is made to any code file (new feature or refactoring). That's how you ensure no regression has taken place. We're talking about automated testing here (unit tests, system tests, acceptance tests, and sometimes performance tests as well).
Continuous integration (CI) will help you achieve that: a CI server (Jenkins, Hudson, TeamCity, CruiseControl...) will hold all your tests and run them automatically whenever you commit a change to source control. It can also calculate test coverage and indicate where your code is insufficiently tested (note that if you do proper TDD, your test coverage should always be 100%).
