I'm using Maven's RPM plugin to create an rpm to package up my project (Java, JSP, JavaScript) and that is working fine. I've now had to make some small changes to a JSP page and a JavaScript file. I'd like to be able to create an RPM that just does an update or patch, instead of the whole project. Is that possible? If so, how?
You can't. Or maybe not in the way that you're hoping for.
You can create a delta RPM, but delta RPMs were designed to save bandwidth when distributing packages, not to apply patches. That is, they must be served from a repository, and your package manager (YUM/DNF) will take the base package that is already installed plus the delta, reconstruct a full new RPM, and install that.
Note that to keep YUM/DNF happy, you must still supply the full new RPM so that clients that don't have the base RPM installed can do an install.
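For completeness, this is roughly what producing and applying a delta looks like with the deltarpm tools (the package names and versions below are placeholders):

    # build a delta between the RPM that clients already have and the new one
    makedeltarpm myproject-1.0-1.noarch.rpm myproject-1.1-1.noarch.rpm myproject-1.0_1.1.noarch.drpm

    # a client that already has 1.0 installed can reconstruct the full 1.1 RPM from the delta
    applydeltarpm myproject-1.0_1.1.noarch.drpm myproject-1.1-1.noarch.rpm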
You would also have to consider what you would do for the next version that comes out: are you going to produce deltas from both previous RPMs?
So yes, you can achieve the effect you're looking for, but unless you're really trying to save bandwidth it may not help you. Typically, unless your download is slow, it takes longer to reconstitute the new RPM from the base and the delta than it does to download the full version.
I'm installing Drupal 8.x via composer downloading any dependencies from the Internet and all works fine.
In this way, however, there is no guarantee that the same versions of the dependencies will be available every time I install. One server might end up with a more recent version of a module than another Drupal server if I install at different times. I would like to guard against this by using a local mirror.
Is it possible to point Composer at a local mirror, and if so, how?
Any examples / references / suggestions?
If you are worried about the versions, the best way is to define the exact versions you want in your composer.json if needed. But apart from that, after you install your dependencies you have a composer.lock file that records the exact versions. This file is committed to your version control and used as the basis for installs: this way you always get the same versions (until you update, of course).
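For example, a minimal sketch (the package name and version are only illustrative):

    # pin an exact version; this writes the constraint into composer.json
    composer require "drupal/core:8.9.2"

    # installs exactly what composer.lock records (commit composer.lock to VCS)
    composer install

    # only an explicit update changes the locked versions
    composer update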
A separate problem might be that there is no internet, or the specific versions are not available for some reason. This shouldn't happen (often), but in that case you should pick this up before you 'release'.
The best practice is to build (finding out whether all packages are available) and then release. You could even create a separate build server that builds your project including the vendor dir, and push from there. The fact that your vendor dir is not in your version control does not mean you have to fetch all dependencies on your production server each time.
This means you have a local copy of your vendor, which is not a local mirror of composer per se, but close enough for comfort.
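A rough sketch of that flow (the rsync target and paths are placeholders, not a recommendation for any particular deployment tool):

    # on the build server: resolve against composer.lock, once
    composer install --no-dev --optimize-autoloader

    # ship the whole tree, vendor/ included, to production
    rsync -az --delete ./ deploy@prod.example:/var/www/drupal/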
We recently brought Golang into the company that I work for, for general use, but we hit a snag in the rollout because Go can use the go get command to fetch packages from the internet. Typically, when we roll out Java and Python we are able to limit where developers can pull packages from by pointing them at our internal Artifactory.
So with Python we can change where they pull from by configuring pip to pull from our internal Artifactory, and with Java we can alter their settings.xml and pom.xml to point to our internal packages.
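For instance, the pip side of that looks roughly like this (the index URL is a placeholder for our internal Artifactory PyPI repository):

    # point pip at the internal index instead of pypi.org
    pip config set global.index-url https://artifactory.example.com/artifactory/api/pypi/pypi-local/simple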
I know that during development you can fetch dependencies into your local environment and then compile them into a standalone binary. What I am looking for is some mechanism that stops people from going out and pulling from the open internet.
Does something like this exist in Go? Can I stop people from going out to the internet and go get'ing packages?
Any help would be greatly appreciated.
It depends on your definition of "roll out", but typically there are three stages:
Development - at this point you can't prevent arbitrary go get calls, apart from putting the development machines behind restrictive proxies or similar technical measures.
Deployment - since Go programs can (should) be deployed as single binaries, go get is not used at all during deployment.
Building deployment artefacts - this is probably your issue:
The usual approach is not to fetch dependencies when building Go programs. Instead, dependencies are fetched during development, and made part of the source tree using vendoring, for example by using the dep tool.
At this point, the build step no longer needs to fetch any dependencies. The choice of which dependencies are allowed now becomes part of the rest of your process, such as code reviews.
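As a minimal sketch of that vendoring step with the dep tool (the added package path is just an example):

    # run from the project root inside GOPATH
    dep init                                 # creates Gopkg.toml, Gopkg.lock and vendor/
    dep ensure                               # populates vendor/ from the lock file
    dep ensure -add github.com/pkg/errors    # add a reviewed dependency explicitly

Once vendor/ is committed, the build server compiles from the tree it checked out and never needs network access.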
We are developing offline due to limited internet access and would like to run, once every several months, a complete grab of an external repository (e.g. repo1.maven.org/maven2; disk space isn't an issue).
Today I'm using a simple POM that contains a lot of the common dependencies we use. I've set my local Maven to use a mirror that proxies through a local Nexus repository to cache artifacts locally, and this is how I'm grabbing them for offline use - but that isn't very effective.
I'm now looking for a command-line tool that allows me to run searches on Maven repositories, so that I can write a script that grabs them all into my local Nexus installation, and I would like to hear whether such a tool exists or whether there is another way to achieve this.
Thanks
Not a whole solution (yet) but I'm using httrack to grab the whole content of repo1.maven.org/maven2 - That is already better than nothing :)
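The invocation is roughly the following (the output directory and filters are illustrative):

    httrack https://repo1.maven.org/maven2/ -O ./maven2-mirror "+*.jar" "+*.pom" "+*.xml" "+*.sha1"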
In general, there is a goal in the Maven Dependency Plugin called go-offline.
It allows you to grab all of the project's dependencies and store them in the local .m2 repository.
You can find more information in the Maven Dependency Plugin documentation.
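A minimal usage sketch (the optional settings file is only an example of routing through your local Nexus):

    # download everything the project needs into the local repository
    mvn dependency:go-offline

    # optionally use a settings.xml whose mirror points at the local Nexus
    mvn -s offline-settings.xml dependency:go-offline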
If you want to run Maven and tell it to behave as if the network does not exist, you can run it with the "-o" option (offline mode). That way, if a dependency is not installed locally, Maven won't even try to go to the network to fetch it - it will fail the build instead.
Conversely, if you want to force Maven to check for and bring in new versions (otherwise they should already be in your repo), you can use the "-U" option.
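For example:

    # offline build: fail rather than touch the network
    mvn -o clean package

    # force Maven to re-check remote repositories for updates
    mvn -U clean package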
I'm not really sure I get the point of a general search-and-download use case. Usually people install Nexus or Artifactory once per network, so that each dependency is downloaded only once. On their local development machines people usually just work with the filesystem and don't maintain tools like this.
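With such a setup, every developer's settings.xml mirrors everything through the local Nexus; a minimal sketch (the id and URL are placeholders):

    <settings>
      <mirrors>
        <mirror>
          <id>local-nexus</id>
          <mirrorOf>*</mirrorOf>
          <url>http://nexus.example.local:8081/repository/maven-public/</url>
        </mirror>
      </mirrors>
    </settings>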
Now, if you want to copy the whole repository from the internet (for copying it later to some other network or similar), you can just use crawlers like Apache Nutch, for example, or craft your own script that recursively downloads all the files.
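A hand-rolled version of that script could be as simple as a recursive wget (the flags and target directory are illustrative, and be aware the full repository is enormous):

    wget --recursive --no-parent --no-host-directories \
         --reject "index.html*" \
         --directory-prefix=./maven2-mirror \
         https://repo1.maven.org/maven2/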
By default, Go pulls imported dependencies by grabbing the latest version on master (GitHub) or default (Mercurial) if it cannot find the dependency on your GOPATH. And while this workflow is quite simple to grasp, it has become somewhat difficult to control tightly. Because every software change incurs some risk, I'd like to reduce the risk of such changes in a manageable and repeatable way and avoid inadvertently picking up changes to a dependency, especially when running clean builds via a CI server or preparing to deploy.
What is the most effective way I can pin (i.e. lock down or capture) a package dependency so I don't find myself unable to reproduce an old package, or even worse, unexpectedly broken when I'm about to release?
---- Update ----
Additional info on the Current State of Go Packaging. While I ended up (as of 7.20.13) capturing dependencies in a 3rd party folder and managing updates (à la Camlistore), I'm still looking for a better way...
Here is a great list of options.
Also, be sure to see the Go 1.5 vendor/ experiment to learn about how Go might deal with the problem in future versions.
You might find the way Camlistore does it interesting.
See the third party directory and in particular the update.pl and rewrite-imports.sh scripts. These scripts update the external repositories, change imports if necessary, and make sure that a static version of the external repositories is checked in with the rest of the Camlistore code.
This means that Camlistore has a completely repeatable build, as it is self-contained, but the third-party components can still be updated under the control of the Camlistore developers.
There is a project to help you manage your dependencies. Check out gopack.
godep
I started using godep early last year (2014) and have been very happy with it (it addressed the concerns I mentioned in my original question). I am no longer using custom scripts to manage the vendoring of dependencies, as godep just takes care of it. It has been excellent for ensuring that no drift is introduced regardless of timing or a machine's package state. It works with the existing go get mechanism and introduces the ability to pin (godep save) and restore (godep restore) based on Godeps/Godeps.json.
Check it out:
https://github.com/tools/godep
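The basic workflow looks like this:

    go get github.com/tools/godep

    # record the exact revision of every dependency the project imports
    godep save ./...

    # on a fresh machine or CI server, set GOPATH back to those recorded revisions
    godep restore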
There is no built-in tooling for this in Go. However, you can fork the dependencies yourself, either on local disk or in a cloud service, and only merge in upstream changes once you've vetted them.
The 3rd party repositories are completely under your control. 'go get' clones tip, you're right, but you're free to check out any revision of the cloned-by-go-get or cloned-by-you repository. As long as you don't do 'go get -u', nothing touches the 3rd party repositories already sitting on your hard disk.
Effectively, your external, locally cloned dependencies are always locked down by default.
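In practice that just means checking out a known-good revision in the clone that go get created (the import path and commit hash below are placeholders):

    cd "$GOPATH/src/github.com/example/somedep"
    git checkout 0f1d2c3   # stay on a vetted revision; avoid 'go get -u' afterwards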
I am trying to set up a continuous integration process. For my various build tasks (compiling, testing, documentation, etc.) I need tools that perform these tasks (csc, NUnit, NDoc, etc.). My question is: should these tools also go into my source control repository?
The reason I think they should is that I read in an online article that the developer environment should be as similar as possible to the build server environment. To fulfill this requirement, the article suggested that you put everything required for your build into the repository, so that when you check out the code (or the build server checks out the code) you are ready to build the project right away without first installing any other tools. But on the other hand, if I put these tools with my source code in the repository, then the build server will have to install them whenever a build is run.
Is it OK to install these tools on every build? Won't it increase the time for each build unnecessarily?
It's often more trouble than it's worth to try to check tools into source control. Rather, write a list of software requirements that must be installed before the source can be checked out and built (one thing that would need to be on this list in any case is the source control system itself). Even if you keep the software in source control, some tools might still need to be installed in certain paths or be otherwise configured (registry entries come to mind).
I would certainly not check in the compiler itself to source control, and I probably wouldn't check in NUnit or NDoc either. Just install these beforehand, as they are not likely to change too much over the lifetime of your project. Your build script might want to check that the expected version(s) of the required software packages are installed before the build may proceed.
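Such a check might look something like this (shown as a shell sketch; the tool name, help command, and expected version are placeholders):

    expected="2.6.4"
    actual=$(nunit-console /help 2>/dev/null | head -n 1)
    if ! printf '%s\n' "$actual" | grep -q "$expected"; then
        echo "Expected NUnit $expected but found: $actual" >&2
        exit 1
    fi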
Unless you're customizing the tools there's probably no reason to put their source code in your repository. However there are excellent reasons for putting your config files in the repository.
Re-installing the tools for every single build is overkill and will slow you down.
However, it is far better to have a server dedicated to continuous integration, so that you know its state; you are sure nobody installed anything that may have an impact on the outcome of the build.
If you want to be able to re-generate today's build next year, you need to be able to re-create your environment first. Make sure you'll be able to re-install your tools (exact same version), either by keeping them on your server (installing the newer versions in different directories), or storing the whole package in your configuration management tool.
Think about how you would create another continuous integration server, either to have two of them, or for a second site, or to recover after a disaster. Document how the continuous integration server was set up.
What really needs to be version controlled is the build scripts, which access the right versions of the tools, especially if you opt to install several versions of the tools.
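For example, the build script can reference each tool by an explicit, versioned install location (the paths and versions below are placeholders):

    # the script, not the environment, decides which tool version is used
    TOOLS_ROOT=/opt/buildtools
    NUNIT="$TOOLS_ROOT/nunit-2.6.4/bin/nunit-console"

    # every invocation goes through the pinned path, never whatever is on PATH
    "$NUNIT" MyProject.Tests.dll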