Why are smoke tests useful with Continuous Integration? - continuous-integration

We usually run smoke tests to check critical functionality whenever we receive a new build. Once the smoke tests pass, we are confident enough to move to the next stage (the next level of testing). I have heard from my colleagues that smoke tests are especially useful when your team employs Continuous Integration and DevOps. Smoke tests are always beneficial, but how do they become more beneficial in combination with CI and DevOps?

Testing is interesting and poses a new challenge for QA every time, requiring a higher level of effort in the final deployment of the product. This includes continuous delivery in a continuous integration environment. In such a continuous deployment process, testing has to run in parallel in order to keep the process moving.

I've usually heard smoke testing used to refer to manual testing that you run to sanity-check builds. This article defines smoke testing as follows:
Smoke Testing, also known as “Build Verification Testing”, is a type of software testing that comprises a non-exhaustive set of tests aimed at ensuring that the most important functions work. The results of this testing are used to decide if a build is stable enough to proceed with further testing.
First, I would certainly hope that people are doing this whenever they check code into the main branch to ensure that their changes didn't break the software in some obvious way. That holds whether you're doing continuous integration or not. (One of my personal pet peeves has always been people who check in code and then leave for the day without checking to make sure that it worked).
Also, keep in mind that in a typical CI cycle nowadays a build will often occur with every checkin to the main branch (or, at a minimum, there will be a nightly automated build; at my current company we have both), so you don't really have time to manually run your entire test suite for every build. One of the main purposes of CI is to have integration (and, as an extension, builds) occur much more frequently than is typical in other kinds of development cycles.
As one final comment: if you're doing continuous integration, I'd strongly encourage you to have some kind of automated testing (e.g. coded UI tests, unit tests, etc.) as part of that. Those can provide basic smoke/sanity testing and regression testing and reduce the burden of having to do all of it manually for every build.
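For instance, such a smoke/sanity layer can be a handful of fast, automated checks wired into every build. Here is a minimal sketch using Python and pytest; the module name myapp and the localhost URL are placeholders I've made up, not anything from the question:

    # smoke_tests.py -- a minimal build-verification ("smoke") suite, run against every new build.
    # Hypothetical example: the module name "myapp" and the URL below are placeholders.
    import importlib

    import pytest


    def test_application_imports():
        """The build is broken if the main package cannot even be imported."""
        assert importlib.import_module("myapp") is not None


    def test_homepage_responds():
        """Hit the single most critical endpoint and expect a 200."""
        requests = pytest.importorskip("requests")  # skip cleanly if the HTTP client isn't installed
        response = requests.get("http://localhost:8000/", timeout=5)
        assert response.status_code == 200

Running a suite like this after every CI build gives a quick go/no-go signal before any deeper (manual or automated) testing.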

Related

Does implementing CI/CD require prerequisite steps?

I'm trying to understand CI/CD strategy.
Many CI/CD articles describe it as automation of the build, test, and deploy phases.
I would like to know: does the CI/CD concept have any prerequisite step(s)?
For example, if I make a simple tool that automatically builds and deploys, but the test step is manual - can this be considered CI/CD?
There's a minor point of minutia that should be mentioned first: the "D" in "CI/CD" can either mean "Delivery" or "Deployment". For the sake of this question, we'll accept the two terms as relatively interchangeable -- but be aware that others may apply a more narrow definition, which may be slightly different depending on which "D" you mean, specifically. For additional context, see: Continuous Integration vs. Continuous Delivery vs. Continuous Deployment
For example, if I make a simple tool that automatically builds and deploys, but the test step is manual - can this be considered CI/CD?
Let's break this down. Beforehand, let's establish what can be considered "CI/CD". Easy enough: if your (automated) process is practicing both CI (continuous integration) and CD (continuous deployment), then we can consider the solution as being some form of "CI/CD".
We'll need some definitions for CI and CD (see above link), which may vary by opinion. But if the question is whether this can be considered CI/CD, we can proceed on the lowest common denominator / bare minimum of popular/accepted definitions and apply those definitions liberally as they relate to the principles of CI/CD.
With that context, let's proceed to determine whether the constituent components are present.
Is Continuous Integration being practiced?
Yes. Continuous Integration is being practiced in this scenario. Continuous integration, in its most basic sense, is making sure that your ongoing work is regularly (continually) integrated (tested).
The whole idea is to combat the consequences of integrating (testing) too infrequently. If you do many many changes and never try to build/test the software, any of those changes may have very well broken the build, but you won't know until the point in time where integration (testing) occurs.
You are regularly integrating your changes and making sure the software still builds. This is unequivocally CI in practice.
But there are no automated tests?!
One may make an objection to the effect of "if you're not running what is traditionally thought of as tests (unit|integration|smoke|etc) as part of your automated process, it's not CI" -- this is a demonstrably false statement.
Even though in this case you mention that your "test" steps would be manual, it's still fair to say that simply building your application would be sufficient to meet the basic definition of a "test" in the sense of continuous integration. Successfully building (e.g. compiling) your code is, in itself, a test. You are effectively testing "can it build". If your code change breaks the compile/build process, your CI process will tell you so right after committing your code -- that's CI in action.
Just like code changes may break a unit test, they can also break the compilation process -- automating your build tests that your changes did not break the build and is, therefore, a kind of continuous integration, without question.
Sure, your product can be broken by your changes even if it compiles successfully. It may even be the case that those software defects would have been caught by sufficient unit testing. But the same could be said of projects with proper unit tests, even projects with "100% code coverage". We certainly don't consider projects with test gaps as not practicing CI. The size of the test gap doesn't make the distinction between CI and non-CI; it's irrelevant to the definition.
Bottom line: building your software exercises (integrates/tests) your code changes, even if only to a minimal degree. Doing this on a continuous basis is a form of continuous integration.
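To make the "a build is itself a test" point concrete, a bare-bones CI job can be nothing more than the sketch below: run the build command and fail the pipeline on a non-zero exit code. The build command shown (byte-compiling a src/ directory) is only a stand-in for whatever your project actually uses:

    # ci_build_check.py -- the bare-minimum integration check: does the project still build?
    # Sketch only: BUILD_COMMAND is a placeholder for your real build step.
    import subprocess
    import sys

    BUILD_COMMAND = ["python", "-m", "compileall", "-q", "src"]  # e.g. byte-compile everything under src/


    def main() -> int:
        result = subprocess.run(BUILD_COMMAND)
        if result.returncode != 0:
            print("Build failed: the latest change broke the build.")
            return 1
        print("Build succeeded: the change integrates, at least as far as a build can prove.")
        return 0


    if __name__ == "__main__":
        sys.exit(main())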
Is Continuous Deployment/Delivery being practiced?
Yes. It is plain to see in this scenario that if you are deploying/delivering your software to whatever its 'production environment' is in an automated fashion, then you have the "CD" component of CI/CD, at least to some minimal degree. The fact that your tests may be manual is not consequential.
Similar to the above, reasonable people could disagree on the effectiveness of the implementation depending on the details, but one would not be able to make the case that this practice is non-CD, by definition.
Conclusion: can this practice be considered "CI/CD"?
Yes. Both elements of CI and CD are present in at least a minimum degree. The practices used probably can't reasonably be called non-CI or non-CD. Therefore, it should be concluded this described practice can be considered "CI/CD".
I think it goes without saying that the described CI/CD process has gaps and could benefit from improvement and, with the lack of automated tests and other features, doesn't reap all the benefits that a robust CI/CD process could offer. However, this doesn't render the process non-CI/CD by any means. It's certainly CI/CD in practice; whether it's a particularly good or robust CI/CD practice is a subject of opinion.
does the CI/CD concept have any prerequisite step(s)?
No, there are no specific prerequisites (like writing automated software tests, for example) to applying CI/CD concepts. You can apply both CI and CD independently of one another without any prerequisites.
To further illustrate, let's think of an even more minimal project with "CI/CD"...
CD could be as simple as committing to the main branch of a GitHub Pages repository. If that same Pages repo, for example, uses Jekyll, then you have CI, too, as GitHub will build your project automatically in addition to deploying it and inform you of build errors when they occur.
In this basic example, the only thing that was needed to implement "CI/CD" was committing the Jekyll project code to a GitHub Pages repository. No prerequisites.
There are even cases where you can accurately consider a project as having a CI process and the CI process might not even build any software at all! CI could, for example, consist solely of code style checks or other trivial checks like checking for newlines at the end of files. When projects include only these kinds of checks, we would still call that check process "CI" and it wouldn't be an inaccurate description of the process.
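To make that concrete, a "newline at the end of files" check really can be an entire (minimal) CI process. A small sketch in Python, assuming a repository of .py files; any file type would work the same way:

    # check_newlines.py -- an intentionally trivial CI check: every .py file must end with a newline.
    # Sketch only; run it from the repository root.
    import pathlib
    import sys


    def files_missing_final_newline(root: str = ".") -> list[pathlib.Path]:
        offenders = []
        for path in pathlib.Path(root).rglob("*.py"):
            data = path.read_bytes()
            if data and not data.endswith(b"\n"):
                offenders.append(path)
        return offenders


    if __name__ == "__main__":
        bad = files_missing_final_newline()
        for path in bad:
            print(f"missing final newline: {path}")
        sys.exit(1 if bad else 0)

Run a check like this on every push and you have a CI process, however small.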

What needs to be done between continuous integration and continuous delivery?

According to my understanding, continuous integration means that whenever a developer checks code in to a branch, the code is automatically built, unit tested (or given other basic tests), and then merged to the master branch. One tool to do that is Jenkins.
Continuous delivery means the code is always READY to be, or CAN be, deployed, though it may not actually be deployed.
So what else should be done to move from continuous integration to continuous delivery? Package the code after more detailed tests like integration/performance/stress tests, tests on different OSes, in different stages (test, production), etc.?
There is a long and a short answer. The short one is: automate all the steps of packaging and deploying to production, and create a safety net that automatically checks that the software is ready for release.
The first includes automating database migrations (taking zero-downtime deployment into account if needed), packaging the binaries, updating configuration files, and gradually deploying to different data centers.
The second includes creating test suites for functional and non-functional tests, such as performance, load testing, security penetration, licensing, etc.
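As a sketch of what that safety net can look like, a release gate can simply run each functional and non-functional suite and only package when everything is green. The suite paths and the packaging command below are placeholders, not a prescription:

    # release_gate.py -- the "safety net": refuse to package until every suite passes.
    # Sketch only; the suite commands and the packaging step are placeholders.
    import subprocess
    import sys

    SUITES = {
        "functional": ["pytest", "tests/functional"],
        "performance": ["pytest", "tests/performance"],
        "security": ["pytest", "tests/security"],
    }


    def suite_passes(name: str, command: list[str]) -> bool:
        print(f"running {name} suite ...")
        return subprocess.run(command).returncode == 0


    def main() -> int:
        for name, command in SUITES.items():
            if not suite_passes(name, command):
                print(f"{name} suite failed: this build is NOT ready for release.")
                return 1
        print("all suites passed: packaging the release candidate.")
        subprocess.run(["python", "-m", "build"], check=True)  # placeholder packaging command
        return 0


    if __name__ == "__main__":
        sys.exit(main())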

Continuous Integration vs. Continuous Delivery vs. Continuous Deployment

What is the difference between these three terms? My university provides the following definitions:
Continuous Integration basically just means that the developer's working copies are synchronized with a shared mainline several times a day.
Continuous Delivery is described as the logical evolution of continuous integration: Always be able to put a product into production!
Continuous Deployment is described as the logical next step after continuous delivery: Automatically deploy the product into production whenever it passes QA!
They also provide a warning: Sometimes the term "Continuous Deployment" is also used if you are able to continuously deploy to the test system.
All this leaves me confused. Any explanation that is a little more detailed (or comes with an example) is appreciated!
Continuous Integration
I agree with your university's definition. Continuous Integration is a strategy for how a developer can integrate code to the mainline continuously - as opposed to frequently.
You might claim that it's merely a branching strategy in your version control system.
It has to do with the size of the tasks you assign to a developer; if a task is estimated to take 4-5 man-days, then the developer will have no incentive to deliver anything for the next 4-5 days, because he's not done with anything - yet.
So size matters:
small task = continuous integration
big task = frequent integration
The ideal task size is not bigger than a day's work. This way a developer will naturally have at least one integration per day.
Continuous Delivery
There are basically three schools within Continuous Delivery:
Continuous Delivery is a natural extension of Continuous Integration
This school looks at the Addison-Wesley "Martin Fowler" signature series and makes the assumption that since the 2007 release was called "Continuous Integration" and the one that followed in 2011 was called "Continuous Delivery", they are probably volumes 1+2 of the same conceptual idea that has to do with continuous something.
Continuous Delivery has to do with Agile Software Development
This school starts from the idea that Continuous Delivery is all about being able to support the principles of the agile movement, not just as a conceptual idea or a letter of intent, but for real - in real life.
It takes its starting point in the first principle of the Agile Manifesto, where the term "continuous delivery" is actually used for the first time:
Our highest priority is to satisfy the customer through early and continuous delivery of valuable software.
This school claims that "Continuous Delivery" is a paradigm that embraces everything required to implement an automated verification of your "definition of done".
This school accepts that "Continuous Delivery" and the buzz word or megatrend "DevOps" are flip sides of the same coin, in the sense that they both try to embrace or encapsulate this new paradigm or approach and not just a technique.
Continuous Delivery is a synonym to Continuous Deployment
The third school advocates that Continuous Deployment and Continuous Delivery can be used interchangeably to mean the same thing.
When something is ready in the hands of the developers, it's immediately delivered to the end-users, which in most cases will mean that it should be deployed to the production environment. Hence "Deploy" and "Deliver" mean the same.
Which school to join
Your university clearly joined the first school and claims that we're referring to volume 1+2 of the same publication series. My opinion is that this is a misuse of the term Continuous Delivery.
I personally advocate for the understanding that Continuous Delivery is related to implementing a real-life support for the ideas and concepts stated by the agile movement. So I joined the school that says the term embraces a whole paradigm - like "DevOps".
The school that uses delivery as a synonym to deploy is mostly advocated by tool vendors who create deployment consoles, trying to get a bit of hype from the more widespread use of the term Continuous Delivery.
Continuous Deployment
The focus on Continuous Deployment is mostly relevant in domains where the end user's access to software updates relies on the update of some centralized source for this information and where this centralized source is not always easy to update because it's monolithic or has (too) high coherence by nature (web, SOA, Databases etc.).
For a lot of domains that produce software where there is no centralized source of information (devices, consumer products, client installations etc.) or where the centralized source of information is easy to update (app stores, artifact management systems, Open Source repositories etc.), there is almost no hype about the term Continuous Deployment at all. They just deploy; it's not a big thing - it's not a pain that requires special focus.
The fact that Continuous Deployment is not something that is generically interesting to everyone is also an argument that the school that claims that "delivery" and "deploy" are synonyms got it all wrong. Because Continuous Delivery actually makes perfectly good sense to everyone - even if you are doing embedded software in devices or releasing Open Source plugins for a framework.
Your university's definition that Continuous Deployment is the natural next step after Continuous Delivery (implicitly assuming that every delivery that is QA'ed should become available to the end-users immediately) is closer to the definition my tribe uses for the term "Continuous Release", which, in turn, is another concept that doesn't generically make sense to everyone either.
A release can be a very strategic or political thing and there is no reason to assume that everybody would want to do this all the time (unless they are an online bookstore or a streaming-service type of company). Nevertheless, companies that don't blindly release everything all the time may have any number of reasons why they would want to be masters of deployment anyway, so they too do Continuous Deployment. Not of releases to production, but of release candidates to production-like environments.
Again I believe your university got it wrong. They are mistaking "Continuous Deployment" for "Continuous Release".
Continuous deployment is simply the discipline of continuously being able to move the result of a development process to a production-like environment where functional testing can be executed in full scale.
The Continuous Delivery Storyline
In the picture it all comes alive:
The Continuous Integration process is the first two actions in the state-transition diagram. Which - if successful - kicks off the Continuous Delivery pipeline that implements the definition of done. Deployment is just one of the many actions that will have to be done continuously in this pipeline. Ideally, the process is automated from the point where the developer commits to the VCS to the point where the pipeline has confirmed that we have a valid release candidate.
Neither the question nor the answers really fit my simple way of thinking about it. I'm a consultant and have synchronized these definitions with a number of Dev teams and DevOps people, but am curious about how it matches with the industry at large:
Basically I think of the agile practice of continuous delivery like a continuum:
Not continuous (everything manual) 0% ----> 100% Continuous Delivery of Value (everything automated)
Steps towards continuous delivery (a small illustrative sketch follows this list):
Zero. Nothing is automated when devs check in code... You're lucky if they have compiled, run, or performed any testing prior to check-in.
Continuous Build: automated build on every check-in, which is the first step, but does nothing to prove functional integration of new code.
Continuous Integration (CI): automated build and execution of at least unit tests to prove integration of new code with existing code, but preferably integration tests (end-to-end).
Continuous Deployment (CD): automated deployment when code passes CI at least into a test environment, preferably into higher environments when quality is proven either via CI or by marking a lower environment as PASSED after manual testing. I.e., testing may be manual in some cases, but promoting to the next environment is automatic.
Continuous Delivery: automated publication and release of the system into production. This is CD into production plus any other configuration changes like setup for A/B testing, notification to users of new features, notifying support of new version and change notes, etc.
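As a small illustration of the continuum above, one can treat each step as a capability and score how many of them are automated. The capability names and the scoring are purely illustrative, not an industry standard:

    # cd_continuum.py -- an illustrative (non-standard) way to place a team on the continuum above.
    CAPABILITIES = [
        "automated build on every check-in",          # Continuous Build
        "automated unit/integration tests",           # Continuous Integration
        "automated deployment to test environments",  # Continuous Deployment
        "automated release into production",          # Continuous Delivery of value
    ]


    def continuum_position(automated: set[str]) -> float:
        """Rough 0.0 ('everything manual') to 1.0 ('everything automated') position."""
        return sum(1 for step in CAPABILITIES if step in automated) / len(CAPABILITIES)


    if __name__ == "__main__":
        team = {"automated build on every check-in", "automated unit/integration tests"}
        print(f"roughly {continuum_position(team):.0%} of the way to continuous delivery of value")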
EDIT: I would like to point out that there's a difference between the concept of "continuous delivery" as referenced in the first principle of the Agile Manifesto (http://agilemanifesto.org/principles.html) and the practice of Continuous Delivery, as seems to be referenced by the context of the question. The principle of continuous delivery is that of striving to reduce the Inventory waste as described in Lean thinking (http://www.miconleansixsigma.com/8-wastes.html). The practice of Continuous Delivery (CD) by agile teams has emerged in the many years since the Agile Manifesto was written in 2001. This agile practice directly addresses the principle, although they are different things and apparently easily confused.
I think Amazon's definition is straightforward and simple to understand.
"Continuous delivery is a software development methodology where the release process is automated. Every software change is automatically built, tested, and deployed to production. Before the final push to production, a person, an automated test, or a business rule decides when the final push should occur. Although every successful software change can be immediately released to production with continuous delivery, not all changes need to be released right away.
Continuous integration is a software development practice where members of a team use a version control system and integrate their work frequently to the same location, such as a master branch. Each change is built and verified by tests and other verifications in order to detect any integration errors as quickly as possible. Continuous integration is focused on automatically building and testing code, as compared to continuous delivery, which automates the entire software release process up to production."
Please check out http://docs.aws.amazon.com/codepipeline/latest/userguide/concepts.html
Atlassian posted a good explanation about Continuous integration vs. continuous delivery vs. continuous deployment.
In a nutshell:
Continuous Integration - automation to build and test the application whenever new commits are pushed to the branch.
Continuous Delivery - Continuous Integration + deploying the application to production by "clicking on a button" (release to customers happens often, but on demand).
Continuous Deployment - Continuous Delivery, but without human intervention (release to customers is ongoing).
Continuous Integration basically just means that the developer's working copies are synchronized with a shared mainline several times a day.
Or more than several times per day. As often as any given discrete task is completed, basically. Consider for example a team of developers working on a single business application. In many environments, the following may happen:
One or two developers keep local changes for a few days because "it's not ready yet".
One or two developers create branches in the source control so they can work on their feature(s) "without being bothered by other people's changes".
These can lead to problems. Poor code/task organization leads to branching, branching leads to merging, merging... leads to suffering. Continuous integration as a practice addresses this by encouraging everybody to work from the same shared source. Individual work items should be discrete enough to be completed in a short amount of time (hours at most).
Basically the general idea is that integrating a small change is a small amount of work. Integrating a large change is a disproportionately large amount of work. The aggregate of integration work is smaller if done in constant small steps. This allows developers to spend more time working on business-visible features instead of development process overhead.
Continuous Delivery is described as the logical evolution of continuous integration: Always be able to put a product into production!
This follows the same idea of discrete, well defined work items. If there's a single master codebase which is only ever adjusted in small increments by complete, tested, known working features then that codebase is always stable. Automated testing is key here to be able to prove that stability at the push of a button.
The less stabilization work that needs to be done (which, again, is development process overhead and should be eliminated), the more often that codebase can be pushed to any given environment. In a lot of companies a deployment can be a pretty grueling process. Even a week-long all-hands-on-deck operation. This is expensive and produces no business value. By employing good work item definitions, effective automated testing, and continuous integration a team can be in a position to automate the codebase's delivery to any given environment.
Continuous Deployment is described as the logical next step after continuous delivery: Automatically deploy the product into production whenever it passes QA!
You'll rarely see this happen in a business environment, and it's quite a joy when it's encountered. If the codebase can be automatically tested and automatically deployed to any given environment then, well, production is an environment like any other. So if the team has built up to this point then there's a potential for significant value to the business by always being able to deploy updates to production.
Defect fixes are sent to customers faster, new features reach the market faster, new ideas are tested against the market in smaller increments to allow for redirection of priorities, etc.
For example, let's say a company has a big idea for a new feature in their software-based product or service. They've done some research, they know the market, and they believe this idea will result in a strong new line of revenue. Now consider two options for delivering that feature:
Spend months developing the whole thing in a one-off branch. Spend weeks integrating it back into the main codebase. Spend days testing it. Spend a day deploying it. And only then start tracking actual revenue in the production system.
Implement small parts of the feature, one at a time. Each week release a new piece of it. Each week get more data on actual revenue.
In the first scenario, if the feature doesn't have the desired market effect then a lot of money is wasted on something customers don't actually want. In the second scenario the fact that customers don't want it is determined much, much earlier and the rest of the work is de-prioritized.
Ultimately these "continuous things" are all about removing development process overhead. If a company's line of revenue is a particular service offering then ideally all of their costs should go into that offering. Development process overhead (merging code, re-testing the same features after a merge, manual deployment tasks, etc.) doesn't actually contribute to the value of the service, so these concepts seek to remove those costs from the process.
One graph can replace many words:
[image: Difference between Continuous Integration, Continuous Delivery and Continuous Deployment]
Enjoy! :-)
I think we're over-analyzing and maybe complicating the "continuous" suite of words a bit. In this context, continuous means automation. For the other words attached to "continuous", use the English language as your translation guide and please don't try to complicate things!
In "continuous build" we automatically build (write/compile/link/etc) our application into something that's executable for a specific platform/container/runtime/etc.
"Continuous integration" means that your new functionality tests and performs as intended when interacting with another entity. Obviously, before integration takes place, the build must happen and thorough testing would also be used to validate the integration. So, in "continuous integration" one uses automation to add value to an existing bucket of functionality in a way that doesn't negatively disrupt the existing functionality but rather integrates nicely with it, adding a perceived value to the whole.
Integration implies, by its mere English definition, that things jive harmoniously so in code-talk my add compiles, links, tests and runs perfectly within the whole. You wouldn't call something integrated if it failed the end product, would you?!
In our context "Continuous deployment" is synonymous with "continuous delivery" since at the end of the day we've provided functionality to our customers. However, by overanalyzing this, I could argue that deploy is a subset of delivery because deploying something doesn't necessarily mean that we delivered. We deployed the code, but because we haven't effectively communicated to our stakeholders, we failed to deliver from a business perspective! We deployed the troops but we haven't delivered the promised water and food to the nearby town.
What if I were to add the "continuous transition" term, would it have its own merit? After all, maybe it's better suited to describe the movement of code through environments since it has the connotation of "from/to" more so than deployment or delivery which could imply one location only, in perpetuity! This is what we get if we don't apply common sense.
In conclusion, this is simple stuff to describe (doing it is a bit more ...complicated!), just use common sense, the English language and you'll be fine.
Continuous Integration: The practice of merging the development work with the main branch constantly so that the code has been tested as often as possible to catch issues early.
Continuous Delivery: Continuous delivery of code to an environment once the code is ready to ship. This could be staging or production. The idea is the product is delivered to a user base, which can be QA's or customers for review and inspection.
Unit tests during the Continuous Integration phase cannot catch all the bugs and business-logic problems, particularly design issues; that is why we need a QA or staging environment for testing.
Continuous Deployment: The deployment or release of code as soon as it's ready. Continuous Deployment requires Continuous Integration and Continuous Delivery; otherwise the code quality won't be guaranteed in a release.
Continuous Deployment ~~ Continuous Integration + Continuous Delivery
Continuous Integration
- Automated (building of check-ins + unit tests)
Continuous Delivery
- Continuous Integration
- Automated (deployment to test environment + load testing + integration tests)
- Manual (deployment to production)
Continuous Deployment
- Continuous Delivery, but with automated deployment to production (the sketch below illustrates the manual vs. automatic trigger)
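The jump from Continuous Delivery to Continuous Deployment in the breakdown above is only about removing the human trigger for the production step; everything else in the pipeline stays the same. A minimal sketch, where the approved_by_human flag is my stand-in for the "button":

    # delivery_vs_deployment.py -- the only difference is who (or what) triggers the production deploy.
    # Sketch only; the "button" is modelled as a boolean flag.
    def release_to_production(build_id: str, practice: str, approved_by_human: bool = False) -> bool:
        """Deploy the given build to production according to the chosen practice."""
        if practice == "continuous delivery" and not approved_by_human:
            print(f"{build_id}: release-ready, waiting for someone to click the button")
            return False
        # continuous deployment, or a delivery that a human has approved
        print(f"{build_id}: deploying to production")
        return True


    if __name__ == "__main__":
        release_to_production("build-42", "continuous delivery")                          # waits
        release_to_production("build-42", "continuous delivery", approved_by_human=True)  # deploys
        release_to_production("build-43", "continuous deployment")                        # deploys automatically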
CI/CD is a journey. Not a destination.
These stages are suggestions. You can adapt the stages based on your
business need. Some stages can be repeated for multiple types of
testing, security, and performance. Depending on the complexity of
your project and the structure of your teams, some stages can be
repeated several times at different levels. For example, the end
product of one team can become a dependency in the project of the next
team. This means that the first team’s end product is subsequently
staged as an artifact in the next team’s project.
Footnote: Practicing Continuous Integration and Continuous Delivery on AWS
Source: https://thenucleargeeks.com/2020/01/21/continuous-integration-vs-continuous-delivery-vs-continuous-deployment/
What is Continuous Integration
Continuous Integration is a process, or a development practice, of automated build and automated test, i.e. a developer is required to commit his code multiple times into a shared repository, where each integration is verified by an automated build and tests.
Whether the build fails or succeeds, the developer is notified and can then take the relevant action.
What is Continuous Delivery
Continuous Delivery is the practice of keeping our code deployable at any point: it has passed all the tests and has all the required configuration to push to production, but hasn't been deployed yet.
What is Continuous Deployment
With the help of CI we have created a build of our application that is ready to push towards production. In this step our build is ready, and with CD we can deploy our application directly to a QA environment; if everything goes well, we can deploy the same build to production.
So basically, Continuous deployment is one step further than continuous delivery. With this practice, every change which passes all stages of your production pipeline is released to your customers.
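The key point is that the same artifact gets promoted from environment to environment; only the target changes. A minimal sketch of that promotion flow (the environment names, the artifact, and the deploy function are all hypothetical):

    # promote_build.py -- continuous deployment promotes the *same* build through environments.
    # Sketch only; deploy() just prints, and the per-environment checks are a placeholder.
    ENVIRONMENTS = ["qa", "staging", "production"]


    def deploy(artifact: str, environment: str) -> bool:
        """Pretend to deploy and return True when that environment's checks pass."""
        print(f"deploying {artifact} to {environment}")
        return True  # stand-in for smoke tests / health checks in that environment


    def promote(artifact: str) -> None:
        for environment in ENVIRONMENTS:
            if not deploy(artifact, environment):
                print(f"stopping: {artifact} failed in {environment}")
                return
        print(f"{artifact} reached production")


    if __name__ == "__main__":
        promote("myapp-1.4.2.tar.gz")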
Continuous Deployment is a combination of Configuration Management and Containerization.
Configuration Management: CM is all about maintaining the configuration of the servers so that it stays compatible with the application's requirements.
Containerization: Containerization is a set of tools that maintain consistency across environments.
Img source: https://www.atlassian.com/
From what I've learned with Alex Cowan in the course Continuous Delivery & DevOps, CI and CD are part of a product pipeline that covers the time it takes to go from Observations to a Released Product.
From Observations to Designs, the goal is to get high-quality, testable ideas. This part of the process is considered Continuous Design.
What happens afterwards, from the Code stage onwards, is considered a Continuous Delivery capability, whose aim is to execute the ideas and release to the customer very fast (you can read Jez Humble's book Continuous Delivery: Reliable Software Releases through Build, Test, and Deployment Automation for more details). The following pipeline explains which steps Continuous Integration (CI) and Continuous Delivery (CD) consist of.
Continuous Integration, as Mattias Petter Johansson explains,
is when a software team has a habit of doing multiple merges per day and
they have an automated verification system in place to check those
merges for problems.
(you can watch the following two videos for a more practical overview using CircleCI - Getting started with CircleCI - Continuous Integration P2 and Running CircleCI on Pull Request).
One can specify the CI/CD pipeline as follows, going from New Code to a released Product.
The first three steps have to do with Tests, extending the boundary of what's being tested.
Continuous Deployment, on the other hand, handles the deployment automatically. So, any code commit that passes the automated testing phase is automatically released into production.
Note: This isn't necessarily what your pipelines should look like, yet they can serve as reference.
DevOps is a combination of 3 C's - continuous, communication, collaboration - and this has made it a prime focus in various industries.
In a world of IoT-connected devices, multiple scrum teams (product owner, web, mobile and QA) work in an agile manner in a scrum-of-scrums cycle to deliver a product to the end customer.
Continuous integration: multiple scrum teams working simultaneously on multiple endpoints.
Continuous delivery: with integration and deployment in place, delivery of the product to multiple customers is handled at the same time.
Continuous deployment: multiple products deployed to multiple customers on multiple platforms.
Watch this to see how DevOps enables the IoT-connected world: https://youtu.be/nAfZt2t4HqA
Continuous Integration : is the practice where developers merge the changes to the code base to the main branch as often as possible. These changes are validated by creating a build and then running automated tests against the build. If these tests don’t pass, the changes aren’t merged, and developers avoid integration challenges that can happen.
Continuous Delivery : is an extension of CI since it enables automation to deploy all the code changes to an environment (dev, qa, stage, prod, etc) after the changes have been merged. The artifact may be built as part of CI or as part of this process since the source of truth (your repository) is reliable given your CI process. In simple terms, this means that there is an automated release process on top of the automated testing process and that developers can deploy their changes at any time by simply clicking a button or at the completion of CI.
Continuous Deployment : takes the process one step further than continuous delivery. Here, all changes that pass the verification steps at each stage in the pipeline are released to production. This process is completely automated and only a failed verification step will prevent pushing the changes to production.
Let's keep it short:
CI:
A software development practice where members of a team integrate their work at least daily. Each integration is verified by an automated build (including tests) to detect errors as quickly as possible.
CD:
CD builds on CI: you build software in such a way that the software can be released to production at any time.

TDD: refactoring and global regressions

While the refactoring step of test driven development should always involve another full run of tests for the given functionality, what is your approach to preventing possible regressions beyond the functionality itself?
My professional experience makes me want to retest the whole functional module after any code change. Is that what TDD recommends?
Thank you.
While the refactoring step of test driven development should always
involve another full run of tests for the given functionality, what is
your approach to preventing possible regressions beyond the
functionality itself?
When you are working on a specific feature it is enough to run tests for the given functionality only. There is no need to do a full regression.
My professional experience makes me want to retest the whole
functional module after any code change.
You do not need to do a full regression, but you can, since unit tests are small, simple and fast.
Also, there are several tools that are used for "Continuous Testing" in different languages:
in Ruby (e.g Watchr)
in PHP (e.g. Sismo)
in .NET (e.g. NCrunch)
All these tools are used to run tests automatically on your local machine to get fast feedback.
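The same idea can be sketched in a few lines of dependency-free Python: poll the source tree and rerun the tests whenever a file's modification time changes. The pytest command is only a placeholder for whatever runner your project uses:

    # watch_tests.py -- rerun the test suite whenever a source file changes (local fast feedback).
    # Standard-library polling sketch; TEST_COMMAND is a placeholder for your project's runner.
    import pathlib
    import subprocess
    import time

    TEST_COMMAND = ["pytest", "-q"]
    WATCHED_DIR = "."
    POLL_SECONDS = 2.0


    def snapshot(root: str) -> dict:
        """Map every .py file under root to its last-modified time."""
        return {path: path.stat().st_mtime for path in pathlib.Path(root).rglob("*.py")}


    def main() -> None:
        last = snapshot(WATCHED_DIR)
        while True:
            time.sleep(POLL_SECONDS)
            current = snapshot(WATCHED_DIR)
            if current != last:
                last = current
                subprocess.run(TEST_COMMAND)  # immediate feedback on the developer's machine


    if __name__ == "__main__":
        main()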
Only when you are about to finish the implementation of the feature is it time to do a full run of all your tests.
Running tests on a Continuous Integration (CI) server is essential, especially when you have lots of integration tests.
TDD is just a methodology for writing new code or modifying old code. Your entire test base should be run every time a modification is made to any code file (new feature or refactoring). That's how you ensure no regression has taken place. We're talking about automated testing here (unit tests, system tests, acceptance tests, sometimes performance tests as well).
Continuous integration (CI) will help you achieve that: a CI server (Jenkins, Hudson, TeamCity, CruiseControl...) will have all your tests, and run them automatically when you commit a change to source control. It can also calculate test coverage and indicate where your code is insufficiently tested (note that if you do proper TDD, your test coverage should always be 100%).

Are code freezes still relevant when using a continuous integration build setup?

I've used a Continuous Integration server in the past with great success, and hadn't had the need to ever perform a code freeze on the source control system.
However, lately it seems that everywhere I look, most shops are using the concept of code freezes when preparing for a release, or even a new test version of their product. This idea runs even in my current project.
When you check-in early and often, and use unit tests, integration tests, acceptance tests, etc., are code freezes still needed?
Continuous integration is a "build" but it's part of the programming part of the development cycle. Just like the "tests" in TDD are part of the programming part of the development cycle.
There will still be builds and testing as part of the overall development cycle.
The point of continuous integration and tests is to shorten the feedback loops and give programmers more visibility. Ultimately, this does mean fewer problems in testing and builds, but it doesn't mean you no longer do the original parts of your development cycle - they are just more effective and can be raised to a higher level, since more trivial problems are being caught earlier in the development cycle.
So you will still have to have a code freeze (or at least a branch) in order to ensure the baseline for what you are shipping is as expected. Just because someone can implement something with a high degree of confidence does not mean it goes into your release without passing through the same final cycles, and the code freeze is an important part of that.
With CI, your code freezes can be very short, since your final build, testing and release may be very reliable, and code freeze may not even exist on small projects, since there is no need for a branch - you release and go right back into development on the next set of features very quickly.
I'd also like to add that CI and TDD allow the final build and testing phase to revert back closer to the traditional waterfall (do all dev, do all testing, then release), as opposed to the more continual QA which has been done on projects with weekly or monthly builds. Your testers can use the CI builds to give early feedback, but it's effectively a different sort of feedback than in the final testing, where you are looking for stability and reliability as opposed to functionality (which obviously was missed in the unit "tests" which the developers had built).
Code freezes are important, because continuous integration does not replace runtime regression testing.
Having an application build and pass unit testing is only a small part of the challenge. Ideally, when you freeze code for a release, you are signing off on two things:
This code has been fully regression-tested, and is defect-free.
This code is EXACTLY the code that should be in production (for SOX compliance).
If you're using a modern SCM, just fork the code at that point and start work on the next release in a branch, and do a merge when the project is deployed. (Of course, place a label so you can roll back to that point if you need to apply a breaking patch.)
Once code is in "release mode", it should not be touched.
Our typical process:
Development -> QAT -> UAT => Freeze until deploy date => Deploy => Merge and repeat
(a new branch for future dev is created at the freeze and merged back in after deployment)
The code freeze has more to do with QA than it has to do with Dev. The code freeze is the point where QA has said: "Enough. We only have bandwidth to fully test the new features added in so far." That doesn't mean dev doesn't have the bandwidth to add more features, it's just that QA needs time with a quiescent code base to ensure that everything works together.
If you're all in continuous integration mode (QA included) this could be just a freeze of a very short time while QA puts the final seal of approval on the whole package just before it goes out the door.
It all depends on how tightly your QA and regression testing are integrated into the dev cycle.
I'd second the votes already mentioned about SCM branching and allowing dev to continue on a different code branch than what QA is testing. It all goes back to the same thing. QA and regression testing need a quiescent code base for a period of time prior to release.
I think that code freezes are important because each new feature is a potential new source of bugs. Sure regression tests are great and help address this issue. But code freezes allow the developers to focus on fixing currently outstanding bugs and get the current feature set into a release worthy state.
At best, if I really wanted to develop new code during a code freeze, I would fork the frozen tree, do my work there, then after the freeze, merge the forked tree back in.
I'm going to sound like one of the context-driven people but the answer is "it depends".
Code Freeze is a strategy to cope with a problem. If you don't have the problem it is good at addressing, then no, it isn't needed. If you have another technique for addressing the problem, then no, it isn't needed.
Code Freeze is one technique for reducing risk. The advantages it brings are stability and simplicity. The disadvantage it brings is reduced velocity, since new work has to wait for the freeze to lift.
Another technique is to use Branching, such as with "Feature Branches". The disadvantage of Branching is the cost of dealing with the branches and of merging changes.
The technique you're describing for reducing risk is Automated Testing to give fast feedback. The trade-off here is increased velocity for some increased risk (you will miss some bugs).
Of these approaches I'm with you, I prefer the Automated Testing. But there are some situations, such as very high cost of failure, where a Code Freeze does provide a lot of value.

Resources