Obsolete test case (or test case of removed feature) - visual-sourcesafe

What should we do with obsolete test cases (or test cases for removed features)?
We are using VSS in our project, and it holds hundreds of test cases.
Suppose some features are removed from the application in the current version, but their test cases are still in VSS.
Should we remove these test cases from VSS, since the corresponding features have been removed from the application? Or should we keep them in a separate folder in VSS named 'Obsolete feature test cases', so that we can reuse them in the future if required?

I suppose you are using a versioning system like Git or Subversion.
So if you have pushed your tests to the server, you can safely remove the obsolete tests now, and if you need them in the future you can get them back through your versioning system.
This keeps your repository and the structure on your computer clean without losing any obsolete code.
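
For example, with Git the removal is reversible at any time; a minimal sketch (the test file path and commit message are just placeholders):

    # remove the obsolete test and record the deletion in history
    git rm tests/obsolete_feature_test.py
    git commit -m "Remove tests for the retired export feature"

    # later: find the commit that deleted the file ...
    git log --oneline -- tests/obsolete_feature_test.py

    # ... and restore it from the commit just before the deletion
    git checkout <deleting-commit>^ -- tests/obsolete_feature_test.py

Subversion can resurrect deleted files in a similar way (svn copy from the revision before the deletion).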

We are using VSS.
When a test case is in SVN or VSS it is safe; we can retrieve it from anywhere, at any time. But if we keep it only on our local machine, there is a chance we will lose it (a system crash, or accidental deletion of the file during a clean-up, for example).
So it looks safer and better to create a folder in VSS and keep all the obsolete test cases there.

Related

How to stop old pact files being merged with new ones, without having to rely on my colleagues knowing to clean the target/pacts folder?

Pact merges pacts at the file level. This is great for merging pacts from multiple tests, but not so great when you want to modify and re-run a test without cleaning the target/pacts folder.
The default JUnit run configuration in IntelliJ doesn't clean the target folder before running the tests. I know I can run Maven clean or remove the files manually, but this means anyone else who runs these tests locally needs to know to run them in a specific way.
I want to merge pacts from multiple tests so I don't want to turn off merging.
I tried implementing a before method that deletes files from the pact folder if they exist, but it was janky.
I'm considering setting the pact folder to a temporary directory that removes itself after the tests are run, but that might interfere with pushing new pacts to the broker, and I don't want to remove the folder too soon/often and end up with missing pacts. Also it's useful to be able to see the files at the end, so auto-removing them isn't ideal.
Is there a nice way to stop old pacts merging with new ones, without relying on people to just know they need to remove old pact files before running a modified test?
Why is it an actual problem for you? As in, yes, the pact file is temporarily bigger than it should be, but what is the actual impact?
You shouldn't be publishing from your local machine anyway; that is a CI concern (I usually enforce this by not providing write credentials to local environments). So if all you need is to be able to re-run a unit test, I wouldn't worry.
Alternatively, if you are all using the same IDE, you could create an IDE-specific run configuration that cleans the directory before any target/test is run, and check it in to the repo.

Why test in continuous integration if you can test on pre-commit and pre-push git hooks?

What is the point of using a Continuous Integration system to test your code if you already have a system like Husky running that allows you to test your code in pre-commit and pre-push hooks?
Pre-commit and pre-push hooks are great for quick operations and tests. Sometimes you can even set up a hook in your IDE that will run quick unit tests every time you save a file. But you usually have multiple suites of tests, and unlike unit tests, functional, integration and performance tests often take longer to run, which is not feasible for hooks.
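
For illustration, a plain Git pre-push hook (the same mechanism Husky wires up for you) can be limited to the fast unit suite; a minimal sketch, where test:unit is an assumed npm script that runs only the quick tests:

    #!/bin/sh
    # .git/hooks/pre-push (or the script Husky points at)
    # Run only the fast unit suite locally; the slower functional,
    # integration and performance suites stay on the CI server.
    npm run test:unit || {
        echo "Unit tests failed - push aborted."
        exit 1
    }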
Also, you want to run your tests in the same environment where you build your deliverables, which is usually not your local machine.
Another reason to use CI system is to run post-merge tests to verify that there are no issues introduced by multiple parallel merges.
All in all, the more tests you run, the better, and a CI system allows you to run both pre-merge tests (usually triggered by some sort of pull request hook) and post-merge tests, all in a controlled, reliable environment.
I'm not really interested about whether it passes in your local environment, where you may have a different version of some dependent library on your environment path. I want to know for sure that anyone's contributions don't break the software when linked against the specific library versions that we ship with.
One reason to test using a Continuous Integration platform like Travis would be to assure developers haven't circumvented their own local development environment's testing git hooks.
CI is not only tests, it's a lot more, but the test stage is of course a very important part of the flow.
As you said in your own answer, local environments can differ; the tests on the CI can have stricter settings, and the environment you test on can be closer to the environment that the end user has (say, fixed versions of software or even hardware).
Say, for example, that you develop a PHP package. The package supports everything from PHP 5.6 to 7.2, it should also support multiple operating systems, and it should behave differently depending on whether ext/open_ssl is installed or not. A local test suite would rarely have a setup allowing the developer to test each of the possible versions on each of the required platforms, but a test suite set up in a CI pipeline could.
And honestly, it's always a good idea to test one more time, just to be safe! ;)
In certain useful and reasonable workflows, it is acceptable to commit and push broken commits (though not to the master branch). Preventing such workflows with git hooks is annoying.
Rebasing or merging, for example, does not run hooks again, even though files change.
Hooks are also very difficult to get right. They check a local state which might not be what gets pushed (if certain files are present that are not in git).
CI servers also provide a stable, predictable environment. Consider, for example, a CI server running Linux and developers using macOS laptops. The git hooks run on macOS, which has a case-insensitive file system, allowing tests to pass even if filenames have the wrong case.
Hooks also punish diligent developers who run checks manually before committing, because tests are just run again one more time.
Each professional project should have CI. The real question is why any project should maintain annoying slow fragile broken local hooks when you already have CI.
Use hooks only for private toy projects.

How to version products inside monorepo?

I have been educating myself about monorepos, as I believe they are a great solution for my team and the current state of our projects. We have multiple web products (Client portal, Internal Portal, API, Core shared code).
Where I am struggling to find the answer that I want to find is versioning.
What is the versioning strategy when all of your projects and products are inside a monorepo?
1 version fits all?
Git sub-modules with independent versioning (kind of breaks the point of having a mono repo)
Other strategy?
And from a CI perspective, when you commit something in project A, should you launch the whole suite of tests in all of the projects to make sure that nothing broke, even though there was not necessarily a change made to a dependency/shared module?
What is the versioning strategy when all of your projects and products are inside a monorepo?
I would suggest that one version fits all for the following reasons:
When releasing your products you can tag the entire branch as release-x.x.x, for example. If bugs come up, you wouldn't need to check "which version of XXX was YYY using".
It also makes it easier to enforce that version x.x.x of XXX uses version x.x.x of YYY, in essence keeping your projects in sync. How you go about this of course depends on what technology your projects are written in.
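
As a concrete illustration of the single-version approach (release-1.4.0 is an arbitrary example):

    # tag the whole monorepo at release time
    git tag -a release-1.4.0 -m "Client portal, API and Core released together"
    git push origin release-1.4.0

    # later, to see exactly what shipped - every product at once:
    git checkout release-1.4.0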
And from a CI perspective, when you commit something in project A, should you launch the whole suite of tests in all of the projects to make sure that nothing broke, even though there was not necessarily a change made to a dependency/shared module?
If the tests don't take particularly long to execute, no harm can come from this, and I would definitely recommend it. The more often your tests run, the sooner you can uncover time-dependent or environment-dependent bugs.
If you do not want to run tests all the time for whatever reason, you could query your VCS and write a script which conditionally triggers tests depending on what has changed. This relies heavily on integration between your VCS and your CI server.
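
A rough sketch of such a script, assuming the products live in top-level folders (client-portal, internal-portal, api, core - names invented here) and that the CI job provides the commit range to compare:

    #!/bin/sh
    # Decide which test suites to run based on what changed.
    # COMMIT_RANGE (e.g. "origin/master...HEAD") is assumed to be set by the CI job.
    CHANGED=$(git diff --name-only "$COMMIT_RANGE")
    run_all=false

    # Shared code affects every product, so run everything in that case.
    if echo "$CHANGED" | grep -q "^core/"; then
        run_all=true
    fi

    for project in client-portal internal-portal api; do
        if [ "$run_all" = true ] || echo "$CHANGED" | grep -q "^$project/"; then
            echo "Running tests for $project"
            # (mvn test, npm test, etc. - whatever that project uses)
        fi
    done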

Best practice for placement of large test datasets?

I'm dealing with an enormous amount of data (say, video) and most integration tests require at least a decent subset of this data.
These test files (subsets) can range from 200MB to 2GB.
Where would be a good place to put these files? Ideally they would not go directly into our version control system because people shouldn't have to download 5GB+ of test data every time they want to check out the project.
The test data needs to be updated by Jenkins whenever a schema change occurs (we already have this part figured out), so either Maven or SVN would need to download the latest version if anybody wanted to run the integration tests.
It would be great if it could be on-demand since we never run all the tests at once locally (e.g., if we are running TestX, then download the files required for this test before running).
Does anybody have any suggestion(s) on how to approach this?
Edit -- For the sake of simplicity let's say that the test files are incompressible.
In this case I would set up a file server share that contains all the test data in a nicely organized way, and let each test download the test data it needs itself. The advantage is that you can update the test data in the central place without updating the tests themselves; the next time the tests run, the new test data will be downloaded.
If you need versioning, you could use a repository manager like Nexus instead of a simple filesystem. If you need auditability, I would suggest a version control system like Subversion. However, make sure that you use a separate repo just for your test data, so you can easily clean out the repo by replacing it with an empty repo that gets only the newest test data loaded.
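
A minimal sketch of the on-demand download idea, assuming a plain HTTP file share; the share URL, cache directory and file name are placeholders:

    #!/bin/sh
    # Fetch a single test data set only when needed; integration tests
    # read it from the local cache afterwards.
    DATASET="video_subset_schema42.bin"       # placeholder file name
    SHARE="http://testdata.example.local"     # placeholder file share
    CACHE="$HOME/.cache/testdata"

    mkdir -p "$CACHE"
    if [ -f "$CACHE/$DATASET" ]; then
        # -z: only re-download if the remote copy is newer than the cached one
        curl -sS -o "$CACHE/$DATASET" -z "$CACHE/$DATASET" "$SHARE/$DATASET"
    else
        curl -sS -o "$CACHE/$DATASET" "$SHARE/$DATASET"
    fi

Each test (or test class) can invoke a step like this for just the files it needs, which keeps local checkouts small and still lets Jenkins refresh the central copies on schema changes.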

Version Control for Hudson Continuous Integration Build Jobs

We have a continuous integration server with over 40 jobs that are constantly changing. I would like to version control continuous integration build jobs in Hudson so we can roll back changes if we have problems.
Is there a Hudson plugin that will do this, or some other existing solution, or should I just keep the config.xml files in SVN?
Hudson Labs has a really great write-up on this, Keeping your configuration and data in Subversion.
This is the first bit of the article:
We all know that keeping important files in version control is critical, as it ensures problematic changes can be reverted and can serve as a backup mechanism as well. Code and resources are often kept in version control, but it can be easy to forget your continuous integration (CI) server itself! If a disk were to die or fall victim to a misplaced rm -rf, you could lose all the history and configuration associated with the jobs your CI server manages.
It’s pretty simple to create a repository, but it isn’t obvious which parts of your $HUDSON_HOME you’ll want to backup. You’ll also want to have some automation so new projects get added to the repository, and deleted ones get removed. Luckily we have a great tool to handle this: Hudson!
We have a Hudson job which runs nightly, performs the appropriate SVN commands, and checks in
You only seem to be interested in the configuration, which is fine; just ignore or filter out the bits about the data and focus on the configuration.
This is one of the more recent threads about using version control with Hudson's configuration on the Hudson users list.
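
The nightly job essentially boils down to a handful of SVN commands run over $HUDSON_HOME; a rough sketch of what its shell step might look like (the repository layout and commit message are assumptions, not the article's exact script):

    #!/bin/sh
    # Nightly Hudson job: keep job configuration under Subversion.
    cd "$HUDSON_HOME"

    # pick up config files for any newly created jobs
    svn add --force --parents jobs/*/config.xml config.xml

    # schedule jobs deleted on disk for removal from the repository
    svn status | grep '^!' | awk '{print $2}' | xargs -r svn delete

    svn commit -m "Nightly backup of Hudson job configuration"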
There are no plugins that store configuration in an SCM right now (March 2010), though the backup plugin might do something close to what you want, but perhaps with less of a view of 'change' and more of just a snapshot at any given time.
The relatively new Job Config History plugin gets part of the way there - it doesn't actually store the configurations in source control, but it does provide history and auditing of changes to jobs.
You could look into the SCM Sync configuration plugin.
It automatically commits all of your Jenkins config changes to SVN; that way you can track configuration errors easily.
https://wiki.jenkins-ci.org/display/JENKINS/SCM+Sync+configuration+plugin
