The L2switch feature is not available in Fluorine, which is the current version of OpenDaylight. How does OpenDaylight Fluorine prevent Layer 2 loops and broadcast floods without the Loop Remover functionality of L2switch?
L2switch is no longer maintained and is not included in Fluorine; you'll need to use older versions of ODL to get it. There may be a way to bring it into Fluorine, but I don't know that process. Keep in mind that ODL is not only about OpenFlow, Layer 2 loop removal, etc.
I'm working with a Grails application on version 2.2.4 and I need a procedure for upgrading to the latest version (I hope that's possible). As a first step I thought of following the instructions on the official site, but those only take me to version 3.
I'd like to know if anyone has already done this or has experience with it: how long it takes, what the process looks like, and the main problems.
Many thanks in advance.
I think you need to follow both sets of upgrade instructions, the one for 3.x and the one for 4.x: start with the 3.x changes and then move on to the 4.x changes.
Another approach that I think may be better is to start an empty 4.x application and then move your code there. Also check first that all the plugins you are using have a 3.x+ version.
The effort required to upgrade can vary massively depending on multiple factors, including the size of the project, the quality of the original code, whether plugins were used (and if so, whether they have been updated or their functionality will need replacing), and whether deprecated taglibs such as remoteFunction were used, etc.
There is not a great deal of difference between 3.x and 4.x so it makes sense to upgrade to 4.x.
Tackle it in stages from the basis of a new project, attempting to rebuild the project between stages.
Reestablish configuration. You don't have to use application.yml (the default in 4.x), so you can create an application.groovy with the same parameters as your old project (see the sketch after these steps).
Move over domain objects but use a new database URL, then compare the schemas between the old and new databases to ensure they match. (This check only matters if you rely on GORM to recreate/update the schema.)
Move over any other sources and command objects, ensuring the project still builds. You may need to modify the build configuration (build.gradle in 3.x+) at this stage to bring in dependencies and plugins.
Move over services, ensure everything compiles, and make sure transactions are behaving as intended.
Move over controllers ensuring any tests run successfully.
Move over the views.
Hopefully if the project is still building at this stage, you can run it!
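For the configuration step above, a minimal application.groovy might look like the following sketch. All values are placeholders, not defaults; mirror whatever your old Config.groovy and DataSource.groovy contained.

    // Hypothetical application.groovy for the upgraded project;
    // every value below is an example.
    grails.serverURL = 'http://localhost:8080'

    environments {
        development {
            dataSource {
                dbCreate = 'update'            // let GORM update the schema
                url = 'jdbc:h2:mem:devDb'      // new database URL for the schema comparison
            }
        }
    }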
I'm in need of code that only exists in the master at the moment, and the docs specifically mention a version called 2.0.0.BUILD-SNAPSHOT, but I'll be damned if I can figure out which repository is hosting such a build. Anyone have any clues?
Also, is there any information published about release schedules? I'm loath to develop against the 1.x API because I want the retry and recovery functionality (which doesn't work in either of the 2.0.0 milestone builds, hence my need for a snapshot), but I don't want to commit to an unreleased library without some sense of when a release might happen.
You can find the BUILD-SNAPSHOT in https://repo.spring.io/snapshot, so you should configure that repository for your project.
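For example, assuming a Maven build (a Gradle setup would be analogous, and the repository id is arbitrary), a sketch of the relevant pom.xml fragments:

    <!-- Add the Spring snapshot repository so Maven can resolve BUILD-SNAPSHOT versions -->
    <repositories>
        <repository>
            <id>spring-snapshots</id>
            <url>https://repo.spring.io/snapshot</url>
            <snapshots>
                <enabled>true</enabled>
            </snapshots>
        </repository>
    </repositories>

    <!-- Then depend on the snapshot build -->
    <dependency>
        <groupId>org.springframework.kafka</groupId>
        <artifactId>spring-kafka</artifactId>
        <version>2.0.0.BUILD-SNAPSHOT</version>
    </dependency>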
The next Spring Kafka release, 2.0.0.RC1, is scheduled for June 28.
You can find that info on the project page as well: https://github.com/spring-projects/spring-kafka/milestone/20
We are evaluating the Reactor library for use in our project. Our project is backed by a Spring context, so we needed a tool for building event-driven applications that has Spring support.
Our main area of focus is the ability to compose sequences of asynchronous events (Streams & Promises). In certain other use cases we might need a publisher/subscriber model or to run long-running processes asynchronously.
As I was evaluating, I noticed the following: the latest released reactor-spring version is v1.1.3, which depends on reactor-v1.1.3 and which we could use.
But I also noticed that there is reactor-v2.0.0 (under development), which has quite a few changes, especially in the area of Streams and Promises.
Please advise whether it is a good idea to go with reactor-v1.1.3 with Spring support, or whether we should wait for reactor-v2.0 if we will rely heavily on Streams and Promises.
If we go ahead with reactor-v1.1.3.RELEASE, how many code changes would be needed to upgrade to v2.0.0?
I also wanted to check whether there is a branch of the 'reactor samples' for reactor-v1.1.3/v1.1.4. As of now I can see only the master branch, which has been updated to use reactor-v2.0.
Is the Reactor API documentation available for the latest release, v1.1.4? Currently the API docs (https://reactor.github.io/docs/api/) point to the Reactor 1.1.0 release.
Where can I find the reactor-core-2.0.0 codebase? I am finding it difficult to locate.
As I am new to this library, please feel free to correct me if any of my points/questions are incorrect. Thanks.
If you're starting new development, definitely go with Reactor 2.0. It's mainly the substantial improvements in the Stream and Promise APIs which resulted in having to bump the major version number. Differences in the rest of the codebase are pretty minimal. Converting between 1.1 code and 2.0 code requires some package renaming and a few tweaks here and there (like eliminating the use of the Deferred object in Stream 1.1).
The other major change that justifies a major point release bump is the implementation of the Reactive Streams specification. It's out of the scope of this question to discuss it further but it is an important part of Reactor moving forward. Being able to natively integrate with Akka Streams, Ratpack, RxJava, and other libraries that already (or will shortly) implement Reactive Streams is a huge benefit to Reactor 2.0.
The reactive-streams branch contains the code for Reactor 2.0. M1 is coming shortly and we'll start the process of updating the samples, though as you've noticed, some components like Spring support have already had to be bumped to Reactor 2.0 since they're relied on in some major almost-production applications.
We have 250+ applications, and most of them depend on generic/common components (around 30 of them).
If there is any change in a common component, we have to release all or most of the dependent applications using Maven. This is a very painful task.
Is there any way we can avoid this? Do we need to change the design of the applications, does Maven provide any solution for this, or is there anything in Hudson that lets us schedule builds to do this?
Any inputs on this will be really helpful.
You could take a look at the Cascade Release Plugin, which might work with Jenkins as well.
However, you should think about your workflow:
Does every application need to have the newest common components right now? Perhaps it is sufficient to include them in the next regular release?
That way, you turn the responsibilities around. Say your common component DOES indeed do a release. The next time your applications build (in the regular nightly build, using SNAPSHOTs), the build process notifies the responsible developer/release manager for each application that there is a new version of common.
The release manager can now decide whether the features/updates are necessary for his application. You could even let the nightly build automatically update the dependency on common (using the versions-maven-plugin).
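As a sketch, such a nightly step could invoke the plugin like this (the com.example:common coordinates are placeholders):

    # Bump the 'common' dependency in the pom to the newest released version.
    mvn versions:use-latest-releases -Dincludes=com.example:common

    # Accept the change and remove the pom.xml.versionsBackup files.
    mvn versions:commit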
The point is: I would strongly advise against automatically updating and releasing the application because of common updates. This would create a strong coupling between both projects. Do the update automatically, if you want, but let the application developers/release managers decide when to release.
Of course, if the change in common is critical, they MIGHT need to release immediately.
By default, Go pulls imported dependencies by grabbing the latest version in master (github) or default (mercurial) if it cannot find the dependency on your GOPATH. And while this workflow is quite simple to grasp, it has become somewhat difficult to tightly control. Because all software change incurs some risk, I'd like to reduce the risk of this potential change in a manageable and repeatable way and avoid inadvertently picking up changes of a dependency, especially when running clean builds via CI server or preparing to deploy.
What is the most effective way I can pin (i.e. lock down or capture) a package dependency so I don't find myself unable to reproduce an old package, or even worse, unexpectedly broken when I'm about to release?
---- Update ----
Additional info on the Current State of Go Packaging. While I ended up (as of 7.20.13) capturing dependencies in a 3rd party folder and managing updates (ala Camlistore), I'm still looking for a better way...
Here is a great list of options.
Also, be sure to see the Go 1.5 vendor/ experiment to learn how Go might deal with the problem in future versions.
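For reference, the experiment is opt-in via an environment variable; a short shell sketch:

    # Enable the Go 1.5 vendoring experiment (on by default from Go 1.6).
    export GO15VENDOREXPERIMENT=1

    # With it enabled, packages copied under ./vendor/ take precedence
    # over anything in $GOPATH when resolving imports.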
You might find the way Camlistore does it interesting.
See the third party directory and in particular the update.pl and rewrite-imports.sh scripts. These scripts update the external repositories, change imports if necessary, and make sure that a static version of the external repositories is checked in with the rest of the Camlistore code.
This means that Camlistore has a completely repeatable build, as it is self-contained, but the third-party components can still be updated under the control of the Camlistore developers.
There is a project to help you manage your dependencies: check out gopack.
godep
I started using godep early last year (2014) and have been very happy with it (it addressed the concerns I mentioned in my original question). I am no longer using custom scripts to manage the vendoring of dependencies, as godep just takes care of it. It has been excellent for ensuring that no drift is introduced regardless of timing or a machine's package state. It works with the existing go get mechanism and introduces the ability to pin (godep save) and restore (godep restore) dependencies based on Godeps/Godeps.json.
Check it out:
https://github.com/tools/godep
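A minimal usage sketch:

    # Record the exact revision of every dependency in Godeps/Godeps.json
    # and copy the dependency source into the project.
    godep save ./...

    # On a clean machine or CI server, check dependencies back out
    # at the pinned revisions.
    godep restore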
There is no built-in tooling for this in Go. However, you can fork the dependencies yourself, either on local disk or in a cloud service, and only merge in upstream changes once you've vetted them.
The third-party repositories are completely under your control. 'go get' clones tip, you're right, but you're free to check out any revision of the cloned-by-go-get or cloned-by-you repository. As long as you don't do 'go get -u', nothing touches the third-party repositories already sitting on your hard disk.
Effectively, your external, locally cloned dependencies are always locked down by default.
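A hypothetical example of pinning one dependency this way (the repository path and commit hash are placeholders):

    # Fetch the dependency once; go get clones tip into $GOPATH.
    go get github.com/example/somelib

    # Pin it to a vetted revision.
    cd $GOPATH/src/github.com/example/somelib
    git checkout a1b2c3d

    # Plain 'go build' now compiles against this revision;
    # just avoid 'go get -u', which would move it back to tip.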