What to commit and what to ignore when adding a flex project to github?
Keep in mind that I want to share it with others and accept pull requests.
I don't use flex, but here are some general rules for all source control:
Commit:
Human written code
Configuration files
Referenced 3rd party libraries (that are not typically part of the standard environment)
In some cases, tools needed to build and run that are not standard (save people hunting and downloading them if you can)
Ignore:
Generated code that can be easily regenerated using scripts or tools
Generated CSS files if you write SASS/SCSS/LESS instead
Generated JS files if you write Coffeescript instead
Build artifacts and build folders
Temporary files (e.g., working files created by some editors)
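As a concrete sketch, a .gitignore applying these rules to a Flex project might look like this (the folder names assume a typical Flash Builder layout; adjust to your build):

# build output folders
bin-debug/
bin-release/
# generated SWFs (do commit 3rd-party SWC libraries you depend on)
*.swf
# editor/OS temp files
.DS_Store
*.tmp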
As an addendum for Git, I prefer to keep some non-code artifacts in submodules to avoid polluting the code repository. This can include:
Large assets, images and videos in some cases
Tools and executables (very handy if you reuse these tools for multiple projects)
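For example (repository URLs are hypothetical), wiring those in as submodules looks like:

git submodule add https://example.com/myproject-assets.git assets
git submodule add https://example.com/build-tools.git tools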
This is not an exhaustive list, and your environment probably dictates some deviations or adjustments here.
Related
I have a project that needs to create/read protobuf files generated by other projects.
I want dlprimitives to be able to read files formatted as ONNX protobuf and Caffe protobuf
What is the best way to include them in the project:
Copy the files from the original repo, with a README reference to the sources for updates
Make caffe/onnx external sub-projects
Download them on demand upon build
My thoughts:
Plain copy: not sure how good it is for updating
External sub-projects: creates huge subprojects and increases clone time, all for a single file, since it is impossible to have a subproject of a single file
On-demand download: assumes that the build environment has internet access.
What would be better policy? How is it usually solved?
All of your approaches are valid.
Protocol buffers are by design forward- and backward-compatible, so updating manually is fine. You can mitigate the issue by documenting the task or providing a script / task.
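For example, a minimal update script could look like this (the pinned revision and target path are assumptions to adjust to the actual upstream layout):

#!/bin/sh
# update_protos.sh - re-copy the vendored proto from a pinned upstream revision
ONNX_REV=0123abc   # record the commit you actually vendored
curl -fsSL "https://raw.githubusercontent.com/onnx/onnx/$ONNX_REV/onnx/onnx.proto" \
    -o third_party/onnx/onnx.proto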
External sub-projects are also fine, but you'll still have to update them manually (in the case of git submodules) and handle changed paths in the original project. You might want to consider contacting the projects and suggesting that they split the format descriptions from the main project.
Usually proxies or caches help in that case. Very few projects build nowadays without external dependencies.
In the end you should consider who your intended audience is. Maybe you can get some feedback from them on how they would use your project?
ONNX and Caffe are the source-of-truth for the protos, and I discourage you from making copies of the protos (as the copies would be non-definitive).
To @eik's 2nd point, I think it's good practice for protobuf maintainers to keep protos in distinct repos so that these repos are easier to integrate. I think this should be considered a best practice, but it's rarely done. Occasionally protobuf maintainers generate multi-language sources on each proto change, but this is just a fixed-cost saving and you may, of course, generate the SDK yourself at any point.
Neither of your referenced protos appears versioned distinctly from the repo. This should also be a best practice.
You should clone|recreate the 3rd-party proto repos on-demand for your builds, retaining the repo's reference, and build clients for whichever languages you need. This keeps the proto contracts (potentially versioned) in sync with your (!) copies of the generated clients.
Import the client (packages) using your language's package import. With Golang, for example, you can add a replace directive in go.mod to redirect from e.g. https://github.com/onnx/onnx to your copy of the generated client.
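A sketch of what that could look like (the module path and directory are hypothetical; neither ONNX nor Caffe publishes an official Go module, so you would point at your own generated client):

// go.mod
module example.com/myproject

require github.com/onnx/onnx v0.0.0

// redirect the upstream path to the local generated client
// (the local directory needs its own go.mod)
replace github.com/onnx/onnx => ./third_party/onnx-go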
I'm currently working with a few other developers on an Xcode project that is stored in a common remote git repository.
Unlike plain project editors such as SlickEdit, Xcode is a full development environment, so the project files should also be kept under version control, since they carry the build configuration inside (in other cases the compilation info is stored in Makefiles, separate from the project structure info, which can be kept local).
The problem arises when merges are needed in the project files. Since these files are auto-generated and can be modified properly only through the Xcode UI, they require manual re-application rather than merging (the modifications to the project files from the merge source have to be added manually through the Xcode UI in the target).
My question is whether this approach is inevitable, and if there are more efficient, easier ways to do it?
The problem arises when merges are needed in the project files. Since these files are auto-generated and can be modified properly only through the Xcode UI
I would advise you not to store auto-generated files under version control with Git or any other VCS. You should only be versioning your source code.
Our company currently uses TFS for source control and build server. Most of our projects are written in C/C++, but we also have some .NET projects and wouldn't want to be limited if we need to use other languages in the future.
We'd like to use Git for our source control and we're trying to understand what would be the best choice for a build server. We have started looking into TeamCity, but there are some issues we're having trouble with which will probably be relevant regardless of our choice of build server:
Build dependencies - We'd like to be able to control the build dependencies for each <project, branch>. For example, have <MyProj, feature_branch> depend on <InfraProj1, feature_branch> and <InfraProj2, master>.
From what we’ve seen, to do that we might need to use Gradle or something similar to build our projects instead of plain MSBuild. Is this correct? Are there simpler ways of achieving this?
Local builds - Obviously we'd like to be able to build projects locally as well. This becomes somewhat of a problem when project dependencies are introduced, as we need a way to reference these resources or copy them locally for the build to succeed. How is this usually solved?
I'd appreciate any input, but a sample setup which covers these issues will also be a great help.
IMHO both issues you mention really fall into the configuration management category and are thus, as you say, unrelated to the build server choice.
A workspace for a project build (doesn't matter if centralized or local) should really contain all necessary resources for the build.
How can you achieve that? Have a project "metadata" git repo with a "content" file listing all your project components and their dependencies (each with its own git/other repo) and their exact versions, effectively tying them together coherently. (You may find it useful to store other metadata in this file down the road as well, such as component-specific SCM info if you use a mix of SCMs across the workspace.)
A workspace pull wrapper script would first pull this metadata git repo, parse the content file, and then pull all the other project components and their dependencies according to the content file info. Any build in such a workspace would have all the parts it needs.
When the time comes to modify either the code in a project component or the version of one of its dependencies, you'll also need to update this content file in the metadata git repo to reflect the update and commit it - this is how your project makes progress coherently, as a whole.
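As a sketch (the content-file format and all names are made up for illustration), the metadata repo could hold a content file like this, matching the <project, branch> pairs from the question:

# component      repo URL                               version (branch/tag/SHA)
MyProj           git@example.com:apps/MyProj.git        feature_branch
InfraProj1       git@example.com:infra/InfraProj1.git   feature_branch
InfraProj2       git@example.com:infra/InfraProj2.git   master

and the pull wrapper would be little more than:

#!/bin/sh
# pull_workspace.sh - clone the metadata repo, then every component it lists
git clone git@example.com:meta/workspace-meta.git meta
grep -v '^#' meta/content | while read name url ref; do
    git clone --branch "$ref" "$url" "$name"   # for exact SHAs, clone then checkout
done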
Of course, actually managing dependencies is another matter. Tons of opinions out there, some even conflicting.
I'm switching an old web-application to using Maven rather than Ant. It goes mostly fine, but there is one thing I'm not sure about.
With a custom-written Ant build file I had a "development deployment mode", where it would symlink certain files (JSPs and certain others) rather than copying them. This would result in a very streamlined development procedure: once you have a deployment running, you just edit files in your source code checkout directory, and the webserver picks up these changes automatically. Basically, you edit something in your editor, save the file, and in a few seconds the changes automatically become visible through your browser, without any further steps.
How would I go about implementing something similar with Maven?
While this doesn't seem possible without writing a custom plugin, I found the war:inplace goal in maven-war-plugin, which achieves what I want. The only downside is that I have to keep JSP files, JS files, images, etc. together in src/main/webapp rather than having them logically separated into e.g. src/main/jsp and src/main/js, but that's not that important.
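For reference, the goal can be invoked directly; war:inplace assembles the exploded webapp in src/main/webapp itself, so the container serves JSPs, JS, and images straight from the checkout (assuming the plugin's default configuration):

mvn war:inplace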
Suppose I have a project "MyFramework" that has some code, which is used across quite a few solutions. Each solution has its own source control management (SVN).
MyFramework is an internal product and doesn't have a formal release schedule, and same goes for the solutions.
I'd prefer not having to build and copy the DLLs to all 12 projects; i.e., new developers should be able to just do an svn checkout and get to work.
What is the best way to share MyFramework across all these solutions?
Since you mention SVN, you could use externals to "import" the framework project into the working copy of each solution that uses it. This would lead to a layout like this:
C:\Projects
MyFramework
MyFramework.csproj
<MyFramework files>
SolutionA
SolutionA.sln
ProjectA1
<ProjectA1 files>
MyFramework <-- this is a svn:externals definition to "import" MyFramework
MyFramework.csproj
<MyFramework files>
With this solution, you have the source code of MyFramework available in each solution that uses it. The advantage is that you can change the source code of MyFramework from within each of these solutions (without having to switch to a different project).
BUT: at the same time this is also a huge disadvantage, since it makes it very easy to break MyFramework for some solutions when modifying it for another.
For this reason, I have recently dropped that approach and am now treating our framework projects as a completely separate solution/product (with their own release-schedule). All other solutions then include a specific version of the binaries of the framework projects.
This ensures that a change made to the framework libraries does not break any solution that is reusing a library. For each solution, I can now decide when I want to update to a newer version of the framework libraries.
That sounds like a disaster... how do you cope with developers undoing/breaking the work of others...
If I were you, I'd put MyFrameWork in a completely separate solution. When a developer wants to develop one of the 12 projects, he opens that project's solution in one IDE & opens MyFrameWork in a separate IDE.
If you strong-name your MyFramework assembly & GAC it, and reference it in your other projects, then the "Copying DLLs" won't be an issue.
You just build MyFrameWork (and a post-build event can run GacUtil to put it in the assembly cache) and then build your other project.
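A sketch of such a post-build event (gacutil's location varies by installed SDK, so treat the call as illustrative):

rem post-build event on the MyFrameWork project
gacutil /i "$(TargetPath)"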
The "best way" will depend on your environment. I worked in a TFS-based, continuous integration environment, where the nightly build deployed the binaries to a share. All the dependent projects referred to the share. When this got slow, I built some tools to permit developers to have a local copy of the shared binaries, without changing the project files.
Does work in any of the 12 solutions regularly require changes to the "framework" code?
If so, your framework is probably new and just being created, so I'd just include the framework project in all of the solutions. After all, if the work dictates that you have to change the framework code, it should be easy to do so.
Since changes in the framework made from one solution will affect all the other solutions, breaks will happen, and you will have to deal with them.
Once you rarely have to change the framework as you work in the solutions (this should be your goal), I'd include a reference to a framework DLL instead, and update the DLL in each solution only as needed.
svn:externals will take care of this nicely if you follow a few rules.
First, it's safer if you use relative URIs (starting with a ^ character) for svn:externals definitions and put the projects in the same repository if possible. This way the definitions will remain valid even if the subversion server is moved to a new URL.
Second, make sure you follow the following hint from the SVN book. Use PEG-REVs in your svn:externals definitions to avoid random breakage and unstable tags:
You should seriously consider using explicit revision numbers in all of your externals definitions. Doing so means that you get to decide when to pull down a different snapshot of external information, and exactly which snapshot to pull. Besides avoiding the surprise of getting changes to third-party repositories that you might not have any control over, using explicit revision numbers also means that as you backdate your working copy to a previous revision, your externals definitions will also revert to the way they looked in that previous revision ...
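Putting both rules together, a definition could look like this (repository layout and revision number are illustrative):

# relative URL, pinned with a peg revision
svn propset svn:externals '^/MyFramework/trunk@1234 MyFramework' SolutionA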
I agree with another poster - that sounds like trouble. But if you don't want to do it the "right way", I can think of two other ways to do it. We used something similar to number 1 below (for a native C++ app).
A script, batch file, or other process that does a get and a build of the dependency (just once); it is re-run only when needed, e.g. when the dependency repo has changed. You will need to know which tag/branch/version to get. You can use a .bat file as a pre-build step in your project files (a sketch follows this list).
Keep the binaries in the repo (not a good idea). Even in this case the dependent projects have to do a get and need to know which version to get.
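Here is the sketch for option 1 (all paths, URLs, and the revision are hypothetical):

rem prebuild_get.bat - pre-build step: fetch and build the dependency once
if not exist ..\deps\libraryA (
    svn checkout -r 1234 http://svn.example.com/libraryA/trunk ..\deps\libraryA
    msbuild ..\deps\libraryA\libraryA.sln /p:Configuration=Release
)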
Eventually what we tried to do for our project(s) was mimic how we use and refer to 3rd party libraries.
What you can do is create a release package for the dependency that sets up a path env variable pointing to itself. I would allow multiple versions of it to exist on the machine, and have the dependent projects link/reference specific versions.
Something like
$(PROJ_A_ROOT) = c:\mystuff\libraryA
$(PROJ_A_VER_X) = %PROJ_A_ROOT%\VER_X
and then reference the version you want in the dependent solutions either by specific name, or using the version env var.
Not pretty, but it works.
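For instance, a dependent C++ project could consume the versioned variable like this (hypothetical .vcxproj fragment; MSBuild resolves environment variables as properties):

<AdditionalIncludeDirectories>$(PROJ_A_VER_X)\include;%(AdditionalIncludeDirectories)</AdditionalIncludeDirectories>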
A scalable solution is to do svn-external on the solution directory so that your imported projects appear parallel to your other projects. Reasons for this are given below.
Using a separate sub-directory for "imported" projects, e.g. externals, via svn-external seems like a good idea until you have non-trivial dependencies between projects. For example, suppose project A depends on project B, and project B on project C. If you then have a solution S with project A, you'll end up with the following directory structure:
# BAD SOLUTION #
S
+---S.sln
+---A
| \---A.csproj
\---externals
+---B <--- A's dependency
| \---B.csproj
\---externals
\---C <--- B's dependency
\---C.csproj
Using this technique, you may even end up having multiple copies of a single project in your tree. This is clearly not what you want.
Furthermore, if your projects use NuGet dependencies, they normally get loaded into the top-level packages directory. This means that NuGet references of projects within the externals sub-directory will be broken.
Also, if you use Git in addition to SVN, a recommended way of tracking changes is to have a separate Git repository for each project, and then a separate Git repository for the solution that uses git submodules for the projects within it. If a Git submodule is not an immediate sub-directory of the parent module, the git submodule command will make a clone that is an immediate sub-directory.
Another benefit of having all projects on the same layer is that you can then create a "super-solution", which contains projects from all of your solutions (tracked via Git or svn-external), which in turn allows you to check with a single Solution-rebuild that any change you made to a single project is consistent with all other projects.
# GOOD SOLUTION #
S
+---S.sln
+---A
| \---A.csproj
+---B <--- A's dependency
| \---B.csproj
\---C <--- B's dependency
\---C.csproj
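The flat layout above can be produced with a single externals definition on the solution directory (URLs and paths illustrative; consider adding peg revisions as discussed earlier):

svn propset svn:externals '^/B/trunk B
^/C/trunk C' S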