I am writing an application that runs against the Octopus 3.0 API and can't find a way around this.
We have dozens of environments that are logically split using a different lifecycle for different versions of our product.
What I want to be able to do is get all environments for version 1 of the product and then do the same for versions 3, 4, etc.
Can anyone point me at the best way of doing this, or any other way of grouping environments?
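For illustration, this is roughly the kind of call involved; it is a minimal sketch only. The /api/environments endpoint, the skip/take paging parameters, and the X-Octopus-ApiKey header are what the Octopus REST API documents, but the server URL and the idea of grouping environments by a version prefix in their names are assumptions, not something Octopus provides out of the box.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Sketch: fetch all environments from the Octopus REST API so they can be
// grouped by a naming convention (e.g. "v1-..." vs "v3-...").
public class ListEnvironmentsByVersion {
    public static void main(String[] args) throws Exception {
        String octopusServer = "https://octopus.example.com";  // hypothetical server URL
        String apiKey = System.getenv("OCTOPUS_API_KEY");       // supplied at runtime

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(octopusServer + "/api/environments?skip=0&take=1000"))
                .header("X-Octopus-ApiKey", apiKey)
                .GET()
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());

        // A real client would parse the JSON (e.g. with Jackson) and group the
        // Items by whatever convention encodes the product version, such as an
        // environment name prefix or the lifecycle the environment belongs to.
        System.out.println(response.body());
    }
}
```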
The question is tied more to CI/CD practices and infrastructure. In the release process we follow, we bundle a set of microservice Docker image tags as a single release, run it through the CI/CD pipeline, and promote that version.yaml to staging and production - a sort of mono-release pattern. The problem with this is that at some point we need to serialize: other changes have to wait until a mono-release is tested and tagged as ready for the next stage. A little more description regarding this here.
An alternative would be the micro-release strategy, where each microservice is released in parallel through the CI/CD pipeline to production. But would this mean that there are as many pipelines as there are microservices? Another option could be a single pipeline with parallel test cases and a polling CD - sort of like the GitOps way, which picks up the latest production-tagged Docker images.
There seems to be precious little information regarding the way microservices are released. Most of it talks about interface-level or API-level versioning and releasing, which is not really what I am after.
Assuming your organization is developing services in a microservices architecture and is deploying to a Kubernetes cluster, you should use some CD (continuous delivery) tool to release new microservices, or even to update an existing one.
Take a look at tools like Jenkins (https://www.jenkins.io), DroneIO (https://drone.io)... Some organizations use Python scripts, or Go, and so on... Personally, I do not like that approach; I think the best solution is to pick a tool from the CNCF Landscape (https://landscape.cncf.io/zoom=150) in the Continuous Integration & Delivery group, as these are tools tested and used in the market.
An alternative would be the micro-release strategy, where each microservice is released in parallel through the CI/CD pipeline to production. But then would this mean that there would be as many pipelines as there are microservices?
That is OK; in some tools you can have a parameterized pipeline that builds projects based on received parameters, but I think the best solution is to have one pipeline per service, plus some parameterized pipelines to deploy, apply specific tests, archive assets and so on... like the micro-release strategy you describe.
Agreed, there is little information about this out there. From all I understand, the approach of keeping one pipeline per service sounds reasonable. With a growing number of microservices you will run into several problems:
how do you keep track of changes in the configuration
how do you test your services efficiently with regression and integration tests
how do you efficiently set up environments
The key here is most probably to make better use of parameterized environment variables that you then version efficiently; this will allow you to keep track of the changes. To achieve this, make sure to a) strictly parameterize all variables in the container configs and the code, and b) organize the config variables in a way that allows you to inject them at runtime. This is a piece of content that I found helpful in regard to point a).
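As a minimal sketch of point a) on the code side (the variable names DB_HOST/DB_PORT are just examples, not anything prescribed by a particular tool): nothing environment-specific is hard-coded, everything is read from variables that the deployment injects at runtime.

```java
// Configuration is read from environment variables injected at deploy time;
// the code fails fast if a required variable is missing.
public class DatabaseConfig {
    public final String host;
    public final int port;

    private DatabaseConfig(String host, int port) {
        this.host = host;
        this.port = port;
    }

    static DatabaseConfig fromEnvironment() {
        String host = require("DB_HOST");
        int port = Integer.parseInt(require("DB_PORT"));
        return new DatabaseConfig(host, port);
    }

    private static String require(String name) {
        String value = System.getenv(name);
        if (value == null || value.isBlank()) {
            throw new IllegalStateException("Missing required environment variable: " + name);
        }
        return value;
    }
}
```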
As for point b), this is slightly trickier. It looks like you are using Kubernetes, so you might just want to pick something like Helm charts. The question is how you structure your config files, and you have two options:
Use something like Kustomize, a configuration management tool that allows you to version configuration to a certain degree, following a GitOps approach. This comes (in my biased opinion) with a good number of flaws: Git is ultimately not meant for configuration management, and it is hard to follow changes, build diffs, and identify the relevant history once you handle that number of services.
Use a Continuous Delivery API (I work for one, so make sure you question this sufficiently). CD APIs connect to all your systems: CI pipelines, clusters, image registries, external resources (DBs, file storage), internal resources (Elasticsearch, Redis), etc. They dynamically inject environment variables at runtime and create the manifests with each deployment. They cache these as so-called "deployment sets". Deployment sets are the representation of the state of an environment at deployment time. This approach has several advantages: it allows you to share, version, diff and relaunch any state any service and application was in at any given point in time. It provides a very clear and bulletproof audit of everything in the setup. QA environments or test-feature environments can be spun up through the API or UI, allowing for fully featured regression and integration tests.
I work in DevOps for a fairly large company that is in the process of transitioning to microservices. This is a new area for most people involved, and some of the governing requests seem like bad practice to me, but I don't have the expertise to argue otherwise.
The request is to generate a report before deploying that would list any new APIs/events (Kafka is our messaging service) in a microservice.
The path that's being recommended is for devs to follow a style guide and then scrape the source code during the CI/CD pipeline to generate a report that can be compared to previous reports to identify any new APIs (a rough sketch of that kind of scraper is shown below).
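To make that recommended path concrete, a crude version of such a scraper might look like the following. This is a sketch under assumptions: a Java codebase where endpoints and Kafka consumers are declared with annotations such as @GetMapping/@PostMapping and @KafkaListener; the annotation names and the source root path are illustrative, not a description of our actual code.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;
import java.util.regex.Pattern;
import java.util.stream.Stream;

// Walks the source tree and prints every line that declares an HTTP endpoint or
// a Kafka listener. A later pipeline step would diff this output against the
// previous build's report to flag new APIs/events before deployment.
public class ApiReportGenerator {
    private static final Pattern DECLARATION =
            Pattern.compile("@(Get|Post|Put|Delete)Mapping|@KafkaListener");

    public static void main(String[] args) throws IOException {
        Path sourceRoot = Path.of(args.length > 0 ? args[0] : "src/main/java");
        try (Stream<Path> files = Files.walk(sourceRoot)) {
            List<String> report = files
                    .filter(p -> p.toString().endsWith(".java"))
                    .flatMap(ApiReportGenerator::matchingLines)
                    .sorted()
                    .toList();
            report.forEach(System.out::println);   // in CI, redirect to a report file
        }
    }

    private static Stream<String> matchingLines(Path file) {
        try {
            return Files.readAllLines(file).stream()
                    .map(String::trim)
                    .filter(line -> DECLARATION.matcher(line).find());
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
    }
}
```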
This seems backwards and unsustainable, but I've been unable to find another solution that would satisfy their request. I've recommended deploying to dev first, then using a tracing tool to identify any API changes or event subscriptions, but they insist on having the report before deploying.
I'm hoping for any advice on best practice to accomplish this.
Tracing and detecting version changes is definitely over-engineering. What's simpler, as #zenwraight has mentioned, is to version your APIs. While tracing through services to explore the different versions and schemas could be a potential solution, it requires a lot more investment upfront, and if that's not the bread and butter of the company, I would rather use a vendor product that supports something like this.
If discovery is needed, I would recommend publishing internal API docs using a tool like Swagger, so that you can search whether there is already an API you can consume.
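A minimal sketch of what that can look like, assuming a Spring Boot service with the springdoc-openapi dependency on the classpath (the controller, endpoint path and descriptions are illustrative): annotating the controllers is enough for the library to publish a browsable spec that other teams can search before building something that already exists.

```java
import io.swagger.v3.oas.annotations.Operation;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RestController;

// The generated OpenAPI document (and Swagger UI) acts as the internal,
// searchable API catalogue for other teams.
@RestController
public class OrderController {

    @Operation(summary = "Fetch an order by id",
               description = "Consumed by the billing and reporting teams")
    @GetMapping("/api/v1/orders/{id}")
    public Order getOrder(@PathVariable String id) {
        return new Order(id);
    }

    record Order(String id) {}
}
```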
And finally, to support moving to different versions, I would recommend having an API onboarding process for the services, so that teams can notify other teams that specific versions of their services are coming to the end of their lifecycle and that they will need to migrate to newer ones.
I decided to move to a microservices architecture, divide a project into multiple services, and run those services on DCOS. It really gives a good story for project deployment and maintenance, but it makes the development process complex.
For the developer, it was easy to run the application locally while implementation was in progress. Now the project is divided into multiple services and runs on DCOS, which requires a good amount of configuration, so testing the application mid-implementation becomes a nightmare for the developer.
If anyone is using DCOS with microservices, can you please suggest what process you are following for internal development?
DCOS is just a tool for your deployment, so in order to give a better answer I would need you to share more about your technology stack in your question. But here are some general thoughts: there are different types/levels of testing, and there are different considerations for each.
On the unit level - this depends on what technology you use for implementing your services, and it is the reason why languages like Go are becoming more and more popular for server development. If you use Go, for example, you can easily run any service you are currently developing locally (not containerized) on the dev machine, and you can easily run the attached unit tests. You would either run dependent services locally or mock them up (probably depending on effort). You may also prefer asking each service team to provide mock services as part of their regular deliveries.
You will also require special environment settings and service configuration for the local environment. Summarized, this approach requires you to have the means in place to run services locally, and depending on the implementation technologies you use it will be easier or harder.
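As a minimal sketch of the "mock them up" option (all names here are illustrative, not part of any framework): the service under development depends on an interface rather than the concrete remote client, so a local or unit-test run can swap in a hand-rolled fake instead of calling the real microservice.

```java
// Dependency abstracted behind an interface so it can be faked locally.
interface InventoryClient {
    int stockFor(String sku);
}

class OrderService {
    private final InventoryClient inventory;

    OrderService(InventoryClient inventory) {
        this.inventory = inventory;
    }

    boolean canFulfil(String sku, int quantity) {
        return inventory.stockFor(sku) >= quantity;
    }
}

class OrderServiceLocalTest {
    public static void main(String[] args) {
        // Hand-rolled fake instead of the real remote inventory microservice.
        InventoryClient fakeInventory = sku -> 10;
        OrderService service = new OrderService(fakeInventory);

        if (!service.canFulfil("ABC-1", 3) || service.canFulfil("ABC-1", 11)) {
            throw new AssertionError("unexpected fulfilment result");
        }
        System.out.println("local checks passed");
    }
}
```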
On the deployment/integration level - set up a minimal cluster on the dev's local machine and/or use dedicated testing and staging clusters. This allows you to test the services including containerization, and in a more final deployment environment with dependencies. You would probably write special test clients for this type of test. This approach will also require you to have separate environment settings, configuration files, etc. for the different environments. Tools like Jaeger are becoming more popular here for helping debug errors across multiple services.
Canary testing - as the name suggests, you deploy your latest service version to a small portion of your production cluster in order to test it on a limited number of users first, before rolling it out to the masses. In this stage you can run user-level tests, and in fact your users become the testers, so it is to be used carefully. Some organizations prefer to have special beta-type users that are the only ones given access to those environments.
We are struggling to figure out the best approach for updating processor configurations as a flow progresses through the dev, test, and prod stages. We would really like to avoid manipulating host, port, etc. references in the processors when the flow is deployed to a specific environment. At least in our case, we will have different hosts for things like Elasticsearch, Postgres, etc. How have others handled this?
Things we have considered:
Pull the config from a properties file using expression language. This is great for processors that have EL enabled, but does not help for those where it isn't.
Manipulate the flow XML and overwrite the host, port, etc. configurations. We are a bit concerned about inadvertently corrupting the XML and about how portable this will be across NiFi versions (a rough sketch of this option is shown below).
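For reference, a rough sketch of what that second option might look like. This is only an assumption-laden illustration, not a recommendation: it assumes you have already gunzipped flow.xml.gz, and the XPath assumes processor properties appear as name/value element pairs, which you would need to verify against the flow file of the NiFi version you are running.

```java
import java.nio.file.Path;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.dom.DOMSource;
import javax.xml.transform.stream.StreamResult;
import javax.xml.xpath.XPathConstants;
import javax.xml.xpath.XPathFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Node;
import org.w3c.dom.NodeList;

// Rewrites every occurrence of a named processor property in an unzipped copy
// of flow.xml before the file is promoted to another environment.
public class RewriteFlowConfig {
    public static void main(String[] args) throws Exception {
        Path flowXml = Path.of(args[0]);   // unzipped copy of flow.xml.gz
        String propertyName = args[1];     // e.g. "Hostname" (illustrative)
        String newValue = args[2];         // e.g. the prod Elasticsearch host

        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .parse(flowXml.toFile());

        // Assumes <property><name>...</name><value>...</value></property> pairs;
        // verify this structure against your NiFi version's flow.xml.
        NodeList values = (NodeList) XPathFactory.newInstance().newXPath().evaluate(
                "//property[name='" + propertyName + "']/value",
                doc, XPathConstants.NODESET);

        for (int i = 0; i < values.getLength(); i++) {
            Node value = values.item(i);
            value.setTextContent(newValue);
        }

        TransformerFactory.newInstance().newTransformer()
                .transform(new DOMSource(doc), new StreamResult(flowXml.toFile()));
    }
}
```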
Any tips or suggestions would be greatly appreciated. There is a good chance that there is an obvious solution we have neglected to consider.
EDIT:
We are going with the templates that Byran suggested. They will definitely meet our needs and appear to be a good way for us to control configurations across numerous environments.
https://github.com/aperepel/nifi-api-deploy
This discussion comes up frequently, and there is definitely room for improvement here...
You are correct that currently one approach is to extract environment-related property values into bootstrap.conf and then reference them through expression language, so that the flow.xml.gz can be moved from one environment to the other. As you mentioned, this only works well with properties that support expression language.
In order to make this easier in the future, there is a feature proposal for an idea called a Variable Registry:
https://cwiki.apache.org/confluence/display/NIFI/Variable+Registry
An interesting approach you may want to look at is using templates. There is a GitHub project that can be used to help with this:
https://github.com/aperepel/nifi-api-deploy
You can look at this post on automating NiFi template deployment.
For automating NiFi template deployment, there is a tool that works well: https://github.com/hermannpencole/nifi-config
Prepare your NiFi development:
Create a template on NiFi and download it
Extract a sample configuration with the tools
Deploy it to production:
Undeploy the old version with the tools
Deploy the template with the tools
Update the production configuration with the tools
I have a question about Spring Profiles. I understand the reason for not using Maven profiles: each environment would require another artifact. That makes sense. I modified my code to use Spring Profiles, but the problem I have with them is that they require you to have a database properties file for each environment on the server. I have this setup, the same setup everyone has seen a hundred times.
src
  main
    resources
      conf
        myapp.properties
      env
        dev.db.properties
        test.db.properties
        prod.db.properties
The problem I think with this setup is that each server would have all the files in the env dir (i.e. dev would have the prod.db.properties and test.db.properties files on its server). Is there a way to copy only the files that are needed during the Maven build without using profiles? I haven't been able to figure one out. If there isn't, then this would seem like a reason to use Maven profiles. I may have missed something. Any thoughts would be greatly appreciated.
This seems like a chicken-and-egg problem to me. If you want your artifact to work in all three environments, you need to ship all three configurations. Not doing so would lead to the same issue you mentioned originally. It's generally bad practice to build an artifact with the same coordinates differently according to a profile.
If you do not want to ship the configuration in the artifact itself, you could externalize the definition, either through the use of a system property or by locating a properties file at a defined place (which you could make overridable for convenience).
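A minimal sketch of that externalized option (the property name db.config and the default path are assumptions, pick whatever suits your setup): the artifact ships without any environment-specific file and loads one whose location is supplied at startup, e.g. with -Ddb.config=/etc/myapp/db.properties.

```java
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.util.Properties;

// Loads database settings from a file whose location is given by a system
// property, falling back to a conventional default path on the server.
public class ExternalDbConfigLoader {
    public static Properties load() throws IOException {
        String location = System.getProperty("db.config", "/etc/myapp/db.properties");
        Properties props = new Properties();
        try (InputStream in = new FileInputStream(location)) {
            props.load(in);
        }
        return props;
    }
}
```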
You should first figure out what your application really is: whether you are running "an application in different environments" or "different applications in their own proprietary environments". These are two slightly different concepts:
If you are running one application in different environments, it's better to put all property files into your jar. Picture it by imagining that you buy a new SUV: you first drive it on a test track, then on ordinary highways, before going off-road to finally enjoy its off-road capabilities. You always use one and the same car in different environments, with all its capabilities and driving characteristics; in each environment the car adapts its behaviour. If you use one application and drive it through different environments, use the first approach and build all environment characteristics into one jar.
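A minimal sketch of that first approach with Spring Profiles, assuming the classpath locations match the layout from the question (adjust the paths to wherever the files end up in your jar; the class names are illustrative): all three property files are packaged, and the active profile decides which one is actually loaded.

```java
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Profile;
import org.springframework.context.annotation.PropertySource;

// One configuration class per environment; only the one matching the active
// Spring profile is registered at runtime.
@Configuration
@Profile("dev")
@PropertySource("classpath:env/dev.db.properties")
class DevDbConfig {}

@Configuration
@Profile("test")
@PropertySource("classpath:env/test.db.properties")
class TestDbConfig {}

@Configuration
@Profile("prod")
@PropertySource("classpath:env/prod.db.properties")
class ProdDbConfig {}
```

The environment is then chosen at startup, for example with -Dspring.profiles.active=prod, so the same artifact runs everywhere.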
On the other hand, you can also use slightly different cars in different environments. If you need different cars, each with its own special driving characteristics for different environments (maybe 4WD, or special floodlights because you are driving at night), you should take the second approach. Back to the application: if you need different applications with far different characteristics in different production environments, it's better to build each application with only the properties it really needs.
Finally you can also merge the two approaches:
my-fun-application-foo.jar for customer foo, with properties for the test, integration and production environments.
my-fun-application-2047.jar for customer 2047, with properties for the test, pre-integration, integration, pre-production and production environments.
Now you should also have an understanding of why you shouldn't use profiles to build an application in different flavours.