UserFrosting best practice for helper functions

What would be the best practice for having custom code (a library of helper functions) in a project using UserFrosting?
As of now, I modify the existing UserFrosting controllers, which bloats their otherwise nice, concise code.
I assume there is a clean way to keep custom functions in a place that doesn't interfere with UserFrosting's code and therefore isn't affected much by UserFrosting upgrades.
At the moment, I'd like to have some custom functions for notifications, barcodes, etc.
I guess using a vendor folder under Composer would be ideal? If so, how would I go about it?
Does UserFrosting have any extensibility mechanism like Symfony?
Any help / pointer is appreciated!
Thanks!

As of version 0.3.1, there is no clean way to separate the core shipped code from developer-implemented code. For minor updates within a version (so, hotfixes to 0.3.1), the best way to keep up-to-date is by using git to make your project a fork of the UserFrosting repository.
So for example, you might have spurgeon/brood-crm (your project repo) as a fork of userfrosting/UserFrosting. You can then set userfrosting/UserFrosting as an upstream remote for your repo. Whenever a hotfix is released for userfrosting/UserFrosting, you can sync your fork with the upstream. This will pull changes to the main repo into your project, and give you a chance to resolve any merge conflicts (hopefully, there won't be any).
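For example, syncing your fork might look roughly like this (using the example repository above, and assuming your main branch is master):
    git remote add upstream https://github.com/userfrosting/UserFrosting.git
    git fetch upstream
    git checkout master
    git merge upstream/master   # resolve any merge conflicts, then commit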
For people who are not familiar with the distinction between git and GitHub, I should point out that you can do all of this locally, without publishing your fork on GitHub.
UserFrosting 4 will (finally) have a modular, fully extensible design. Rather than having to modify the shipped code directly, you will be able to override the core routes, templates, schema, assets, etc. in a separate directory. Upgrading from version 0.3.x to version 4, however, will probably need to be done manually.

Related

How to version products inside monorepo?

I have been educating myself about monorepos as I believe it is a great solution for my team and the current state of our projects. We have multiple web products (Client portal, Internal Portal, API, Core shared code).
Where I am struggling to find an answer is versioning.
What is the versioning strategy when all of your projects and products are inside a monorepo?
1 version fits all?
Git submodules with independent versioning (which kind of defeats the point of having a monorepo)
Other strategy?
And from a CI perspective, when you commit something in project A, should you launch the whole suite of tests in all of the projects to make sure that nothing broke, even though there was not necessarily a change made to a dependency/shared module?
What is the versioning strategy when all of your projects and products are inside a monorepo?
I would suggest that one version fits all for the following reasons:
When releasing your products you can tag the entire branch as release-x.x.x, for example. If bugs come up, you wouldn't need to check "which version of XXX was YYY using?"
It also makes it easier to enforce that version x.x.x of XXX uses version x.x.x of YYY, in essence keeping your projects in sync. How you go about this, of course, depends on what technology your projects are written in.
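As a sketch (the version number here is invented), a single tag can then cover every product in the repository:
    git tag -a release-1.4.0 -m "Release 1.4.0 of all products"
    git push origin release-1.4.0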
And from a CI perspective, when you commit something in project A, should you launch the whole suite of tests in all of the projects to make sure that nothing broke, even though there was not necessarily a change made to a dependency/shared module?
If the tests don't take particularly long to execute, there's no harm in it, and I would definitely recommend it. The more often your tests run, the sooner you can uncover time-dependent or environment-dependent bugs.
If you do not want to run tests all the time for whatever reason, you could query your VCS and write a script which conditionally triggers tests depending on what has changed. This relies heavily on integration between your VCS and your CI server.
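A minimal sketch of that idea, assuming the CI server exposes the previous and current commits as environment variables (the variable and directory names below are made up):
    #!/bin/sh
    # Run project A's tests only if files under projectA/ changed in this build.
    CHANGED=$(git diff --name-only "$PREVIOUS_COMMIT" "$CURRENT_COMMIT")
    if echo "$CHANGED" | grep -q '^projectA/'; then
        (cd projectA && make test)
    fi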

Git: How to share only selected folders and files from a repository to allow 2 teams to collaborate on it, but not share the whole code base?

Let's say there is a team working on a main Git repository using a branching model. Now a second team joins and starts to work on a subset of the project. As a starting point they need to collaborate on one folder from the repository. They are not allowed to see the rest of the code base. What is the best way to achieve that?
Going forward they would need to be able to merge their changes into the main code base and get any updates from that one folder along the way too.
This is all based on Windows OS with Atlassian Stash and Git on internal network.
That would mean that one folder needs to be its own repo:
added as a submodule (tracking a branch) in the main Git repo (see the example commands after this list)
forked by the second team, so they can push to the fork and make pull requests, or synchronize from the original folder's repo.
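A rough sketch of the submodule side of that setup (the repository URL and paths are placeholders):
    # In the main repo: add the extracted folder's repository as a submodule tracking a branch
    git submodule add -b master https://stash.example.com/scm/proj/shared-folder.git shared-folder
    # Later, pull in the latest commits from that branch
    git submodule update --remote shared-folder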
I would suggest separating the subset into a sub-project and using language-specific ways to deliver it during the build of your main project. For example, if you use MS Visual Studio, you could turn it into a library or module and use NuGet to deliver it when building your main project.
In my experience this is much more convenient than using submodules when it comes to merging.
Another reason to do this, and maybe an even more important one, is that the other team would be able to handle the project as a compilable and testable unit instead of a pile of source files.

What is the most effective way to lock down external dependency "versions" in Golang?

By default, Go pulls imported dependencies by grabbing the latest version in master (GitHub) or default (Mercurial) if it cannot find the dependency on your GOPATH. While this workflow is quite simple to grasp, it has become somewhat difficult to control tightly. Because all software change incurs some risk, I'd like to reduce the risk of this potential change in a manageable and repeatable way, and avoid inadvertently picking up changes to a dependency, especially when running clean builds via a CI server or preparing to deploy.
What is the most effective way I can pin (i.e. lock down or capture) a package dependency so I don't find myself unable to reproduce an old package, or even worse, unexpectedly broken when I'm about to release?
---- Update ----
Additional info on the Current State of Go Packaging. While I ended up (as of 7.20.13) capturing dependencies in a third-party folder and managing updates (à la Camlistore), I'm still looking for a better way...
Here is a great list of options.
Also, be sure to see the Go 1.5 vendor/ experiment to learn how Go might deal with the problem in future versions.
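If you want to try it, the behaviour is gated behind an environment variable in Go 1.5 (it is enabled by default in later releases):
    # Build using packages vendored under ./vendor/ instead of copies from GOPATH
    GO15VENDOREXPERIMENT=1 go build ./...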
You might find the way Camlistore does it interesting.
See the third party directory and in particular the update.pl and rewrite-imports.sh scripts. These scripts update the external repositories, change imports if necessary, and make sure that a static version of the external repositories is checked in with the rest of the Camlistore code.
This means that Camlistore has a completely repeatable build, as it is self-contained, but the third-party components can still be updated under the control of the Camlistore developers.
There is a project to help you manage your dependencies. Check out gopack.
godep
I started using godep early last year (2014) and have been very happy with it (it addressed the concerns I mentioned in my original question). I am no longer using custom scripts to manage the vendoring of dependencies, as godep just takes care of it. It has been excellent for ensuring that no drift is introduced regardless of timing or a machine's package state. It works with the existing go get mechanism and introduces the ability to pin (godep save) and restore (godep restore) based on Godeps/Godeps.json.
Check it out:
https://github.com/tools/godep
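The basic workflow looks roughly like this (a sketch; adjust the package pattern to your project):
    go get github.com/tools/godep
    godep save ./...   # record dependency revisions in Godeps/Godeps.json and copy their source
    # later, on a clean machine or CI server:
    godep restore      # check the recorded revisions out into your GOPATH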
There is no built-in tooling for this in Go. However, you can fork the dependencies yourself, either on local disk or in a cloud service, and only merge in upstream changes once you've vetted them.
The third-party repositories are completely under your control. 'go get' clones tip, you're right, but you're free to check out any revision of the cloned-by-go-get or cloned-by-you repository. As long as you don't do 'go get -u', nothing touches the third-party repositories already sitting on your hard disk.
Effectively, your external, locally cloned dependencies are always locked down by default.
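In other words, pinning can be as simple as checking out a known-good revision in the clone that go get created (the import path and commit hash below are placeholders):
    cd "$GOPATH/src/github.com/some/dependency"
    git checkout 3f1c2a7   # stay on this revision until you deliberately update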

Perl and Ruby modules in the same repository?

I've started working on a new Perl module and I've decided that I want to make a Ruby version of it as well (once I finish the Perl version). Do people tend to make separate repositories for each language? Or put them in the same repository?
I can easily see how the two sets of code are different enough to be treated as separate projects. But at the same time it's the same functionality written in two languages, so from that perspective it seems like a single project with two language ports.
What's considered best practice in this situation?
FWIW, I'm using git.
EDIT: I should be more clear here. These aren't modules in the sense of git submodules. They're modules that will be submitted to CPAN and RubyGems. Users of this project will likely be installing it via cpan or gem and then using/requiring it in the normal fashion.
In the course of my group's research, we have a couple of repos, some with different technologies in each. We divide the repos by research question and check out only the projects we are working on, with the repos sharing a uniform hierarchical directory structure that is the same for all projects. Since we already know the repo directory structure, running scripts and finding data becomes much easier.
I would recommend taking the same approach. The higher the division between the two technologies, the easier it will be to contribute to one of them without being confused by the presence of the other.
In the end, ask yourself this: if I were to add another language, would I still keep it in one repo? If the answer is yes, keep doing what you're doing. If not, keep these libraries in two separate repos and manage the projects and contributors distinctly.
My experience in this kind of case is to have two smaller git repos, one for each of the modules. Cloning a single branch of a module into the consumer project's repo keeps things quite simple. Another way is to create a bare clone from the module's repo inside the consumer project's repo, then keep updating it as each module's development progresses. The consumer project should ignore the injected repos.
Once another developer clones module A and/or B, they can just push to the consumer project, as permissions allow. Whether this is a pro or a con depends on your situation.
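A minimal sketch of the single-branch option (the URL and paths are placeholders):
    # Clone just one branch of the Perl module into the consumer project,
    # and tell the consumer repo to ignore the nested clone:
    git clone --single-branch --branch master https://example.com/my-module-perl.git vendor/my-module-perl
    echo "vendor/my-module-perl/" >> .gitignore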

Updating multiple projects using svn:externals

Overview
I am using VisualSVN in Visual Studio, VisualSVN Server on Windows, and of course TortoiseSVN. I wanted to know the best method of sharing multiple projects across multiple solutions, and whether there is a better approach than the one described below.
Layout
My Repository kind of looks like this (not their real names):
Library.Common
Library.Web
Library.DB
Library.CMS
Customer1.Site
Customer2.Site
Process
To create a new site that contains common projects:
Create a repository in VisualSVN Server, e.g. "Customer3.Site"
Create the web site using Visual Studio 2008, named "Customer3.Site", using VisualSVN to commit it to the repository created in step (1).
Edit the properties of Customer3.Site and specify the necessary projects as svn:externals, e.g. "Library.Common", "Library.DB", etc. (see the example commands after this list).
Perform an update to get these external projects, add them to my solution in Visual Studio, add the necessary references to the Customer3.Site web project, and hit build.
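From the command line, step 3 looks roughly like this (TortoiseSVN exposes the same property through its dialog; the URLs below are placeholders):
    # externals.txt contains one "folder URL" pair per line, e.g.:
    #   Library.Common https://svn.example.com/svn/Library.Common/trunk
    #   Library.DB     https://svn.example.com/svn/Library.DB/trunk
    svn propset svn:externals -F externals.txt Customer3.Site
    svn update Customer3.Site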
So far so good.
The Problem
All this works fine, and I am happy that if I have to modify any of the core Library projects I can do so right in the same environment and commit them to the repository. But as more and more customer sites are built, I will have to keep track of what I've done and remember to SVN update and rebuild those sites, which seems like quite a long-winded task.
Is there a better way of doing this, a more best-practice solution? Am I breaking any fundamental SVN laws by doing it this way? I want to find a good solution that doesn't cost too much time and isn't overly complex either.
I've been facing a similar issue ... I am setting up a base install package for WordPress, something we would use to quickly get a site set up. It contains the core of WordPress plus a set of baseline plugins, both third-party ones and custom ones we've created. Pretty much everything comes from SVN.
Different plugins have different versions/tags, and setting up an svn:external pointing to a specific tagged version per project would be a nightmare ... only to then have to go into each and every project, adjust the property, and then update.
What I am going to implement is a vendor branch with specific versions as needed. All I should then have to do is update the client sites, since they will always be pointing to the latest versions (under my control in the vendor branches).
As to your problem, and also in my case: I would probably write a commit hook script to update all projects automatically when something in the vendor branch is updated.
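A very rough sketch of such a hook (hooks/post-commit on the server), assuming the customer sites' working copies live somewhere the hook can reach and that the shared code sits under a vendor/ path; all paths below are assumptions:
    #!/bin/sh
    REPOS="$1"
    REV="$2"
    # If this commit touched the vendor branch, update every customer site's working copy.
    if svnlook changed -r "$REV" "$REPOS" | grep -q 'vendor/'; then
        for SITE in /srv/sites/Customer1.Site /srv/sites/Customer2.Site; do
            svn update "$SITE"
        done
    fi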