ClueMapper for multiple project tracking - project-management

I currently have several projects in separate SVN repositories and across multiple Trac installations. I'd like to combine them all into one project management/issue tracking setup. From the looks of it, Trac doesn't handle multiple projects well. I considered using Redmine after reading this question but I'd rather stay with Trac if I can.
Has anyone used ClueMapper? Is it good? Is it actively developed? How different is it from Trac? How hard is it to set up?
If you have any other suggestions for projects that handle multiple projects and issue tracking well, I'd like to hear them.

Guess no one uses ClueMapper. I wound up going with Redmine and it's way better than Trac. I highly recommend it.

Related

Best Practice for Having a Base Project and Multiple Similar Sub-projects

I have been writing an e-shop project for a customer, and now I have signed a new, similar contract with another customer. I was wondering what the best practice would be to continue the first project while starting the second so that reusability is maximized.
One way would be to change the first project to read all menu items, slider pictures, ... from the database so that I can deliver the same project to both customers with different databases. The benefit of this approach is that I have to manage only one project, but it leads me to gradually write a CMS, which is a time-consuming task.
The other solution would be to use Git. For example, I would fork the base project into two different projects. If the functionality I am writing is the base one, then I would push it into the base project; otherwise, I push it into the appropriate forked project.
Which one is the better approach in your opinion? Or do you have any better ideas?
Cheers,
Habib
There are a few things that need to be considered.
First of all, this project, as you said, has the potential to be sold to more customers. So you must think about how much of it can be made dynamic via configuration files, hooks, and plugins, so that the project's functionality can be modified through those mechanisms. I know you have already considered this.
Second, use a core repository and different forks for customization. It's a great idea, but it needs proper discipline, workflow, and manpower to make sure everything is fine-tuned and works properly (see the sketch below).
It's highly recommended to make your application cloud-native and to provide a proper UAT/QA environment for testing before launching to production, and also to implement test cases that run in your Git and CI/CD pipelines in order to prevent issues during merges.
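A minimal sketch of that core-plus-forks workflow, assuming a hypothetical shared core repo and one fork per customer (all repo names and URLs below are placeholders, not from the original post):

```bash
# Hypothetical core-repo-plus-forks layout; names/URLs are made up.

# Each customer gets a fork that tracks the shared core as "upstream".
git clone git@git.example.com:customer-a/eshop.git
cd eshop
git remote add upstream git@git.example.com:acme/eshop-core.git

# Generic improvements land in the core and are pulled into each fork...
git fetch upstream
git merge upstream/main

# ...while customer-specific work stays on branches in the fork only.
git checkout -b feature/customer-a-checkout-tweaks
```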
I'm not certain about what you want, but if you want to develop an enterprise project that contains many features such as wallet, tracking, payment, and so on, I think you can implement each service as a microservice and integrate all of them.
As for Git, I think it is best used just for handling the source code; you could use Git submodules for handling the microservices and plain branches for the development process.
I have finally found some solutions that I would like to share with you. Let's divide the differences into two big categories, data differences and code differences:
Differences in data
If the database in each project is different (e.g., the product has some features in one project and other features in another project), then the best solution is to use a NoSQL database such as MongoDB. NoSQL databases are designed precisely to support data that doesn't have a well-defined structure, where you don't know what features you may add to each entity now or in the future. That applies completely to my scenario, in which each shop may have a different data structure. However, since my project is based on Laravel, which has no built-in support for MongoDB, I decided to design some key-value tables instead, and they haven't been so bad so far.
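For illustration only, a key-value (entity-attribute-value) table along the lines described might look roughly like this; the table and column names are made up, and the SQL is run through sqlite3 just to keep it concrete:

```bash
# Hypothetical sketch of a key-value table for per-shop product features.
# Table/column names are invented; adapt to your own schema and engine.
sqlite3 shop.db <<'SQL'
CREATE TABLE product_attributes (
    id         INTEGER PRIMARY KEY,
    product_id INTEGER NOT NULL,
    name       TEXT    NOT NULL,   -- e.g. "color", "warranty_months"
    value      TEXT,               -- stored as text, cast in application code
    UNIQUE (product_id, name)
);
SQL
```

Each shop can then attach whatever attributes it needs to a product without schema changes.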
Differences in the code
Regarding differences in the code, I would definitely suggest branches in Git and other functionality provided by Git hosts, such as GitLab repository mirroring. Each feature has its own branch in my code, and I can provide each customer with different functionality by merging only the branches I want to deliver to that customer.
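A rough sketch of that per-feature branching and selective merging, with hypothetical branch names:

```bash
# Each feature lives on its own branch, cut from the shared base.
git checkout -b feature/wishlist main
# ...develop, commit...
git checkout -b feature/multi-currency main
# ...develop, commit...

# Customer A's deliverable gets only the wishlist feature.
git checkout -b customer/a main
git merge --no-ff feature/wishlist

# Customer B's deliverable gets a different selection.
git checkout -b customer/b main
git merge --no-ff feature/wishlist
git merge --no-ff feature/multi-currency
```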
All in all, you may move as much business logic as you can into the database, since changing it there later is more straightforward. On the other hand, you'd better keep themes in the code, because every customer likes a different theme, and changing themes in the code is easier than moving them into the database.

Least-impact solution to binary references in VCS

We are using TeamCity 2017.1 and have been using it for years with great joy. A long time ago, someone decided that all third-party binaries should be put into Subversion (our VCS of choice).
This has worked fine, but over time this repository has grown quite large, and combined with us getting better and better at using TeamCity, we now have dozens of build configurations which all use third-party binaries.
Our third-party folder is called Department and is around 2.6 GB in size. By itself that is not so bad, but remember that this folder is used by pretty much every single project on the build server!
Now, I will agree with everyone who says that we should use NuGet packages, network shares, etc., and that would work great for new projects. However, we have a lot of history, and we cannot begin to change every single solution and branch.
A co-worker came up with the idea that we could make a single build project that in reality does nothing but keep a single folder updated with our Department stuff. Then we would just need to find a way to reference this, without having to change all our projects and solutions.
My initial thought is to use snapshot dependencies and then create a symbolic link as the first build step and remove it as the last, in order to preserve the same relative paths.
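As a rough sketch, those first/last build steps could look something like this (the shared path is a placeholder, and on Windows agents you would use mklink rather than ln):

```bash
# Hypothetical first build step: link the shared, pre-updated Department
# checkout into the relative location the solutions already expect.
ln -s /opt/teamcity/shared/Department ../Department
# (Windows equivalent: mklink /D ..\Department D:\shared\Department)

# ...normal build steps run here, resolving ../Department as before...

# Hypothetical last build step: remove the link so the agent stays clean.
rm ../Department
```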
But is there a better way? What do other people do?
And keep in mind that replacing the binaries with NuGet packages or something else is not an option.
Let me follow your colleague's idea and improve on it. There would be a build configuration that monitors the Subversion repository and copies packages to a network share. That network share would be used by the development teams as a NuGet repository. Projects that convert their dependencies from binary references to NuGet references will enjoy faster builds. When all the teams are using the NuGet repository, you can kill that Subversion repository.
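A sketch of what that copy step might run, assuming the binaries have already been packed into .nupkg files and that a simple folder-based feed on the share is enough (all paths and names below are invented):

```bash
# Hypothetical build step: publish each package to a file-share feed
# that teams can then reference as a NuGet source.
for pkg in packages/*.nupkg; do
    nuget add "$pkg" -Source //fileserver/nuget-feed
done

# Consumers register the share once as a package source, e.g.:
#   nuget sources add -Name DepartmentFeed -Source \\fileserver\nuget-feed
```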

What are the current downsides of gradle?

What are the current downsides of Gradle? I have been researching different build tools, but I haven't seen anything that points out problems with Gradle as of October 2014. I have seen somewhat dated claims that Gradle users are on the bleeding edge. Is that still true, or has Gradle reached a decent point of maturity? (As far as I know, in terms of IDE integration it might be more mature than others.) Searching "why not use gradle" doesn't really help, and "problems with gradle" mostly shows people getting help (a plus). Most of what I have read were build tool comparisons, and the newer ones didn't list any flaws of Gradle.
Having not really used Gradle except with libGDX projects, I can't confirm whether the issues presented in old comparisons still exist, but it seems like they don't.
The one thing I have seen that might be a problem is that it is "slow". If slowness is really an issue, please explain how slow it is and what the impacts are.
Another somewhat reasonable downside is that people need to learn it in order to use it. To someone who doesn't know any build tool yet this isn't really a problem, and for others Gradle seems well documented and easy enough to learn.
I understand there are complications with switching and making sure the build still fully works, so I am not asking what the downsides of switching to Gradle are specifically, but more generally: why not use it now in a new project?
I am looking for non-opinionated reasons and problems.
I am currently using Gradle as part of Android Studio, and it's working decently. For me there was a HUGE headache getting started, because Gradle's default settings require an internet connection for the initial build (I assume for downloading dependencies and updates), and it fails to build if it can't reach the internet the first time (so there are some firewall issues you might need to work around). So the only 'con' I have to add is that firewalls can be a problem with the initial setup of Gradle.
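For what it's worth, two standard workarounds for that situation, both documented Gradle options (the proxy host and port below are placeholders for your own environment):

```bash
# Build without network access once dependencies are already cached locally.
./gradlew build --offline

# Or let Gradle through the firewall by configuring a proxy in
# ~/.gradle/gradle.properties (hostname/port are placeholders).
cat >> ~/.gradle/gradle.properties <<'EOF'
systemProp.http.proxyHost=proxy.example.com
systemProp.http.proxyPort=8080
systemProp.https.proxyHost=proxy.example.com
systemProp.https.proxyPort=8080
EOF
```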

What is the most notable difference between Jenkins and Hudson from a user perspective?

It has been around 10 months now since Jenkins split off from Hudson.
When looking at the project homepages, I am wondering what the differences between Hudson and Jenkins really are by now. From the changelogs I do not really learn much. There are a bunch of changes, and the major difference seems to be that Jenkins releases more often with fewer changes per release, while Hudson releases less frequently but with more changes per release.
Are there any notable differences yet?
So are there things that make me, as a developer who needs a CI system, more productive with one rather than the other?
Is one of them more stable than the other?
Is there any difference yet that has nothing to do with politics around Oracle?
What is the most notable difference from your point of view?
One notable difference is that a large number of plugins have moved to Jenkins. While you would still be able to use the old versions with Hudson, the newer versions already depend on Jenkins. Also, new plugins are mostly created with dependencies on fairly recent Jenkins versions, so you probably won't be able to use them on Hudson without hassle.
This will probably differ from plugin to plugin, and some might be more compatible with Hudson than others, while still others provide versions for both tools. But if something does not work well with a plugin, you will get help more easily if you use Jenkins.
EDIT: Here is an interesting link I found, which not only provides some solid numbers on the different paths Jenkins and Hudson have taken, but also addresses the (non-)issue of IP that was mentioned in the other post here...
Check out the work being done on cleaning up the code and the IP checks that are needed to belong to the Eclipse Foundation. This is one of the big differentiators if you care about clean IP.
How many plugins are you using? Hudson supports many of the most important plugins independently and is working with plugin owners to keep compatibility with those that are still maintained by their owners at Jenkins.
See the JavaOne presentations that show how Hudson is being maintained and new features added.
https://oracleus.wingateweb.com/scheduler/eventcatalog/eventCatalogJavaOne.do (search for Hudson)
Also check out the Hudson project at Eclipse http://www.eclipse.org/hudson/

Perl and Ruby modules in the same repository?

I've started working on a new Perl module and I've decided that I want to make a Ruby version of it as well (once I finish the Perl version). Do people tend to make separate repositories for each language? Or put them in the same repository?
I can easily see how the two sets of code are different enough to be treated as separate projects. But at the same time it's the same functionality written in two languages, so from that perspective it seems like a single project with two language ports.
What's considered best practice in this situation?
FWIW, I'm using git.
EDIT: I should be more clear here. These aren't modules in the sense of git submodules. They're modules that will be submitted to CPAN and RubyGems. Users of this project will likely be installing it via cpan or gem and then using/requiring it in the normal fashion.
In the course of my group's research, we have a couple of repos, some with different technologies in each. We divide the repos by research question and check out only the projects we are working on, with the repos sharing a uniform hierarchical directory structure that is the same for all projects. Since we already know the repo directory structure, running scripts and finding data becomes much easier.
I would recommend taking the same approach. The higher the division between the two technologies, the easier it will be to contribute to one of them without being confused by the presence of the other.
In the end, ask yourself this: if I were to add another language, would I still keep it in one repo? If the answer is yes, keep doing what you're doing. If not, keep these libraries in two separate repos and manage the projects and contributors distinctly.
My experience in this kind of case is to have two smaller Git repos, one for each module; cloning a single branch into the consumer project's repo keeps things quite simple. Another way is to create a bare clone of the module's repo inside the consumer project's repo and then just keep updating it as each module's development progresses. The consumer project should ignore the injected repos (see the sketch below).
Once another dev clones module A and/or B, they can just push to the consumer project, as permissions allow. This is either a pro or a con, depending on your situation.
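A minimal sketch of that "clone one branch into the consumer repo and ignore it" setup, with made-up URLs and paths:

```bash
# Pull one branch of the module repo into a subfolder of the consumer
# project, and keep the consumer repo from tracking that folder.
cd consumer-project
git clone --single-branch --branch main \
    git@example.com:me/my-module.git vendor/my-module
echo "vendor/my-module/" >> .gitignore

# Later, as the module evolves, refresh the injected copy:
git -C vendor/my-module pull
```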
