how to build and run the light-portal from light-4j platform - light-4j

I am a developer who wants to create an ecosystem around microservices. My research led to your software projects, which are outstanding in many respects.
Unfortunately, one of the components I couldn't get running for an initial review was the portal.
The build failed due to a missing light-4j version (1.5.29).
The light-4j master branch is at 1.5.23, so I checked out the portal at a version that matches the light-4j version. With this, the docker-compose-hybrid.yml script failed due to other missing libraries. Given that I had already reverted to an older version of the portal sources, I am almost sure that I am on the wrong track.
Do you have any advice on how to get this solved?
Thank you in advance.

Thanks a lot for your interest in the light platform. The light-portal is still in heavy development on the develop branch, which depends on the develop branches of light-4j and other libraries. The easiest way to build it is through light-bot, our own DevOps tool for microservices: as you can see, you are dealing with many dependencies, and most DevOps tools on the market can only deal with one repository at a time.
https://github.com/networknt/light-config-test/tree/master/light-bot/develop-build/build-portal
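If you want to see what light-bot automates, the manual equivalent is roughly to clone and build the develop branches in dependency order so the SNAPSHOT jars land in your local Maven repository. A hedged sketch only; the full repository list and build order live in the light-bot config linked above:

    # Manual sketch of what light-bot automates; repository list abbreviated.
    git clone -b develop https://github.com/networknt/light-4j.git
    (cd light-4j && mvn clean install -DskipTests)
    git clone -b develop https://github.com/networknt/light-portal.git
    (cd light-portal && mvn clean install -DskipTests)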
Also, please be aware that the light-portal services are built on top of light-hybrid, which is a serverless framework. The build process just creates small jar files and copies them to the read and write service folders. You then need to start a docker-compose to bring up the two services, which load all the deployed jars.
The following docker-compose starts the light-portal locally.
https://github.com/networknt/light-config-test/tree/master/light-portal
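Roughly, assuming the compose files in that folder are current, starting it locally looks like:

    # Sketch of starting the portal locally from the repository linked above.
    git clone https://github.com/networknt/light-config-test.git
    cd light-config-test/light-portal
    docker-compose up -d     # brings up the hybrid read and write services
    docker-compose logs -f   # watch the services load the deployed jars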
I am starting to write a light-portal tutorial, but a lot of topics are still missing. Please let me know if you see any gaps so that I can add more info.
https://doc.networknt.com/tutorial/portal/
Email might not be the best channel, as the communication stays private. In the future, you can ask questions on Gitter, where other people might know the answer when our team members are not immediately available. Also, answers on a public channel might help other users learn the platform.
https://gitter.im/networknt/light-4j

Related

Continuous Integration/Delivery Tools (EOL Bamboo)

I've been looking through the site and have found some information on this topic, but most of it is old and possibly outdated.
example: Continuous Integration tools
We are: a SaaS product with a microservice (200+) architecture.
We have: We currently do our builds through Bamboo, and we use Nexus as an artifact manager with proper versioning. We deploy those artifacts to many different machines using Bamboo. For our frontend deployment we build our code through Continua and use AWS CodeDeploy to handle the deployment. We use Bitbucket and Jira for our development. We have done a POC with Bitbucket Pipelines, but we were lacking proper version management there, as well as proper environment management. Setting up 10 servers for every repository manually is just something we don't want to do.
We want: Since Bamboo is EOL next year and there are many alternatives with different levels of complexity, we are currently unsure which tools are most suited to our needs. We are currently running everything on dedicated Linux machines, but we want to switch to Docker containers in AWS in the near future. Support for running gulp scripts and the like would be great, since that could help us move from Continua and Bamboo to one single solution.
Setting up Bamboo has been a struggle in the past due to difficulties with the software itself. A nice balance between features and complexity would be best. Does anybody have experience with one or more of the options out there? Some that come to mind are CircleCI, TeamCity, GitLab, Jenkins, and AWS CodePipeline.
Many thanks,
Kenny
Bamboo isn't EOL next year, but Atlassian is forcing a switch from perpetual licenses to Data Center (DC) licenses that must be renewed every year. You can get discounted prices when switching from Server to DC licenses. See details at https://www.atlassian.com/licensing/data-center
I would propose Kraken CI. It is open source and works on-premises as well as in the cloud. In the cloud it supports AWS and Azure, and it can autoscale depending on the number of tasks.
If you are interested please contact me.

How to publish a Maven project

I am developing a Java framework/API to solve a problem at a client. The code/idea is my property (not the client's). I think it might be useful for others, so I would like to publish it as an open source project.
By publishing I mean bringing it out in the open - making it available as a Maven project.
I can think of conforming to Maven structure, proper documentation/example usage available on a web site, and unit tests, maybe some code coverage threshold.
But does it have to be run by some committee? Do I have to present it to somebody? What steps do I need to take to eventually have it available as a Maven dependency?
There's no committee or approval process that I know of. All you have to do is put your code into a public GitHub repo. This is how open source software works.
Per Kapep's excellent suggestion below, you have to choose a license as well. Apache, Creative Commons, GNU, MIT - these are a few of your choices. Know what they mean before you decide.
Your problem begins on that day - you'll have to make others aware of it and see if it's adopted by others. If it's good, you'll have the nice problems of dealing with a user base and having others change your code. If not, it'll languish in the repo.
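For the last step in the question - making it resolvable as a Maven dependency - publishing a release to a public repository such as Maven Central (via Sonatype OSSRH) is the usual route. A hedged sketch of the commands, assuming the pom.xml already carries the metadata Central requires (a groupId you control, license, SCM details) and a GPG key is set up for signing; the "release" profile name is illustrative:

    # Hedged sketch: assumes pom.xml is configured for Sonatype OSSRH, with
    # source/javadoc/gpg plugins bound to an illustrative "release" profile.
    mvn clean verify               # compile, test, and package locally first
    gpg --list-secret-keys         # confirm a signing key is available
    mvn deploy -Prelease           # sign and upload the artifacts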

TFS Team Build - Testing to Production

I have scoured the internet to find out what I can on this, but have come away short. I need to know two things.
Firstly, is there a best practice for how TFS and Team Build should be used in a Development > Test > Production environment? I currently have my local VS get the latest files. Then I work on them and check them in. This creates a build that pushes the published files to a location on the test server, which IIS references. This creates my test environment. I wonder, then, what the best practice is for deploying this to a live environment once testing is complete?
Secondly, off the back of the previous: my web application is connected to a database, so the test version will point to a test database. But when it is tested and put live, I will need that process to also make sure that any data connections are changed to the live database.
I am pretty much doing all this from scratch and am learning as I go along.
I'd suggest you look at Microsoft Release Management, since it's the tool that can help you do exactly the things you mentioned. It can also be integrated with TFS.
In general, release management is:
the process of managing, planning, scheduling and controlling a software build through different stages and environments; including testing and deploying software releases.
Specifically, the tool that Microsoft offers would enable you to automate the release process, from development to production, keeping track of what and how everything is done when a particular stage is reached.
There's an MSDN article, Automate deployments with Release Management, that gives a good overview:
Basically, for each release path, you can define your own stages, each one made of a workflow (the so-called deployment sequence) containing the activities you want to perform using pre-defined machines from a pool.
It's possible to insert manual interventions/approvals if necessary, and the whole thing can be triggered automatically once your build is done.
Since you are pretty much in control of the actions performed on each machine in each stage (through the use of built-in or custom actions/components), it is also certainly possible to change configuration files, for example to test different scenarios, etc.
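The database-connection part of the question is typically handled with per-stage configuration variables. Conceptually it boils down to token replacement in the config as the release moves through stages; a rough sketch of the idea (the token name and connection string are made up, and Release Management does this through stage variables and built-in actions rather than a literal shell step):

    # Conceptual sketch only: substitute a placeholder with the stage's value.
    # Test stage:
    sed -i 's|__DB_CONN__|Server=test-sql;Database=AppDb|' Web.config
    # The production stage would substitute the live connection string instead.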

Dynamics CRM 2011, 5 developers, 5 databases - how to sync solutions

We are 5 developers working today with 1 database.
We always have one async service running to allow debugging; this means that when a developer wants to debug async, he announces to the others that he is hijacking the async service to his machine until he finishes debugging.
We want to switch to a database per developer. There are a lot of issues with that, for example syncing schema changes/solutions with the other programmers.
What is the best practice with a large team of developers? Is there any tool/methodology that works best for large teams?
Also, in general, what is the best practice for large teams developing Dynamics CRM 2011?
Thanks
Typically, I have worked with, or advised, the following:
All devs work on their own virtual system. Much easier debugging. No trampling on or coordinating with others. I use VirtualBox.
Work is exported (unmanaged solutions) into a common build system.
Work is merged into the relevant managed solution(s) in build.
Managed solution(s) exported from build and applied to test / uat / pre-production etc.
Managed solution(s) applied to production environment.
Highly recommended reference: Microsoft released a very thorough whitepaper on Lifecycle management. Read about it here.
A typical development flow could be:
Developers develop against their own personal development organization (Online/On-premise), in a solution with the same publisher / name
They export the developer solution
They unpack the zip file into the XML structure
And check it into source control, merging it with the master version
A typical deployment into the integration organization could be:
Get the latest version of the XML structure from source control
Package it into a .zip solution
Import it into the integration organization
This way, you have a full history of all changes, linked to the developers, and you can make controlled merges, using merging tools you're familiar with.
A developer can always get the latest version from source control, package it, and deploy it in his own development organization.
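The unpack/pack steps in both flows map to the SolutionPackager tool from the CRM SDK. A hedged sketch of both directions (file and folder names are illustrative):

    :: Hedged sketch using SolutionPackager from the CRM SDK; names are illustrative.
    :: Extract an exported solution zip into source-controllable XML files:
    SolutionPackager.exe /action:Extract /zipfile:MySolution.zip /folder:src\MySolution
    :: Repackage the XML tree from source control into a zip for import:
    SolutionPackager.exe /action:Pack /zipfile:MySolution.zip /folder:src\MySolution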

What's the workflow of Continuous Integration With Hudson?

I was referred to Hudson today.
I have heard about continuous integration before, but I have no idea what the heck a CI server is.
Hudson is really easy to install in Ubuntu and in several minutes I managed to set up an instance of it.
But I don't quite understand the workflow of a CI server, or how I am supposed to use it.
Please tell me if you have experience with CI. Thanks in advance.
Edit:
I am currently using Mercurial as my SCM, and I wonder what is the right way to use it with Hudson.
I have installed the Mercurial plugin for Hudson, and I created a new job with a local repository. When I commit to the repository, the Hudson job builds with the latest version of my source code.
If I used a remote repository instead, what would the workflow be like?
Is it something like the following?
Set up a Hudson job with the repository
Developer makes a local clone of the repository
Developer commits and pushes changes
The remote repository updates with the incoming changeset
Run a Hudson build
There may be something I've misunderstood entirely; please help me point it out.
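In command form, the remote-repository flow above would look roughly like this on the developer's side (the repository URL is illustrative):

    # The remote-repository flow above, from the developer's side (sketch).
    hg clone https://hg.example.com/myproject   # one-time local clone
    cd myproject
    # ... edit code ...
    hg commit -m "describe the change"
    hg push                                     # changeset reaches the remote repository
    # Hudson then picks up the change (by polling or a hook) and runs the build.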
Continuous Integration is the process of "integrating software" continuously, i.e. as frequently as possible (ultimately after each set of changes), to avoid any big-bang integration and all the subsequent problems by getting immediate feedback.
To implement Continuous Integration, you first need to automate the build of your software (where "build" means of course compiling sources and packaging them, but also compiling tests, running the tests, running quality checks, etc. - anything that will help to get feedback on the health of your code). Then you need to trigger the build on the latest version of the sources on a particular event (a change in the repository, a temporal event), generate reports, and send notifications upon failure (by mail, Twitter, etc.).
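To make "automate the build" concrete: for a Maven project, for example, the build Hudson runs can be a single command whose non-zero exit code on any failure is what lets the CI engine mark the build red:

    # One-command automated build for a Maven project (illustrative):
    # compile, run the tests, and package; any failure exits non-zero.
    mvn clean verify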
And this is precisely the responsibility of a CI engine: offering trigger mechanisms, getting the latest version of the sources, running the build, generating and publishing reports, and sending notifications.
And because running a build is CPU- and disk-intensive, CI engines usually run on a dedicated machine (or even a farm of machines if you want to build lots of projects).
Back to your question now. Once you've got Hudson running, configure it (Manage Hudson > Configure System): set up the JDK, build tools, etc. Then set up a Hudson job and follow the steps: configure the location of the source repository, the build tool, the trigger, and a notification channel, and you're done (you can do more complex things, but that's a start).
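As an alternative to timer-based polling, a push can trigger the build directly. One common pattern is a Mercurial changegroup hook on the served repository that calls Hudson's remote build URL; the host, job name, and any required auth token below are assumptions:

    # Sketch: trigger a Hudson build on every push instead of polling.
    # Goes in the served repository's .hg/hgrc; host and job name are made up,
    # and a security token may be required depending on the job's configuration.
    [hooks]
    changegroup = curl -fsS http://hudson.example.com/job/myproject/build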
For more details on the setup, check:
The official Use Hudson guide for more details. << START HERE
Continuous Integration with Hudson - Tutorial.
Spot defects early with Continuous Integration.
Martin Fowler's overview of continuous integration is one of the canonical references. In my opinion, using automation to make sure your code base is healthy is one of the most useful things that you can set up.
Update: Sorry that I didn't have much time earlier to expand on my reply. @Pascal_Thivent is right that in order to use CI effectively, you need to be able to automate your builds, tests, etc. CI is actually a good forcing function for this. For me, it's one of those little warning flags if I start to think that it would be too painful to put a build into Hudson. It means that something is not quite right.
What I like about Hudson is that it's flexible enough to accommodate different workflows. We use it for both builds / unit tests and releases. And it eliminates a lot of the worry about certain release procedures only working in one person's environment.
What I don't like about Hudson is that it is occasionally unstable when new builds break plugins. I've had a couple of upgrades (2 out of 10 or so) go bad because of incompatibilities. I do two things now:
I never upgrade my team's Hudson server to the latest and greatest right away. I generally only upgrade when there are significant new features, or bug fixes.
I now have a basic Hudson instance set up with all my plugins on a virtual machine with some dummy builds that I fire up to test out any new upgrades before doing it on the public server.
