From what I can tell after reading the IBM documentation, after creating a component in UCD you have to manually map that component to an available resource/agent that you had already set up.
The way my pipeline is set up, my Jenkins job is the one creating components. In other words, on UCD I have the application, environment, agents, etc. all set up, but no components are created, because my Jenkins job (really a Jenkinsfile) is what creates them.
But in order to do a successful deployment, one of the things you have to do is map the component to an agent. I don't want to have to log back into UCD to manually map each newly created component to one of the available agents.
When Jenkins creates the components, it refers to an already-defined Component Template in UCD. In the Component Template I can specify a Component Process, and I suspect this process could include a step that maps the current component to an agent, but I haven't been able to figure this out.
I may have found the answer: you can set a component tag during the Jenkins deploy job (these properties can be passed as parameters).
You can also set up component tags on agents. If the value of the component tag on an agent matches the tag on a component, that component can be deployed to a VM through the agent with the matching component tag.
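To avoid logging into UCD at all, the tagging itself can also be scripted from the Jenkins side. Below is a minimal sketch, assuming UCD exposes a component-tagging endpoint at /cli/component/tag (the URL, credentials, and endpoint path are assumptions; verify them against the REST API docs for your UCD version):

```python
# A minimal sketch, assuming UCD's REST endpoint for tagging components is
# PUT /cli/component/tag (verify the path against your UCD version's docs).
import requests

UCD_URL = "https://ucd.example.com:8443"  # hypothetical server URL
AUTH = ("admin", "password")              # or a UCD auth token

def tag_component(component_name, tag):
    """Add a tag to a component so agents with a matching tag can receive it."""
    resp = requests.put(
        f"{UCD_URL}/cli/component/tag",  # assumed endpoint
        params={"component": component_name, "tag": tag},
        auth=AUTH,
        verify=False,  # UCD commonly runs with a self-signed certificate
    )
    resp.raise_for_status()

# Called from the Jenkins pipeline right after the component is created:
tag_component("my-new-component", "web-tier")
```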
I have a NiFi flow that keeps some state with the ListS3 processor.
I have a dev instance and a prod instance.
I want options for deploying from dev to prod where the state is kept and where I don't have to manually go in and change all the processors and process groups.
It seems like this can't be done with templates, based on the following Stack Overflow question:
How does NiFi ListFile maintain its timestamp?
Edit:
Just so there is no misunderstanding: I want to keep the prod state when deploying.
It sounds like you aren't using NiFi Registry, so you're downloading a flow template and then importing it. This can't preserve state, as the imported copy is not the same flow.
You should be using NiFi Registry to version control your flows, which supports this Dev -> Prod workflow.
1. Build your flow in Dev NiFi, then version it to the Registry.
2. In Prod, add a new Process Group and select the Import option when it asks you for a name. You'll be able to pick your versioned flow.
3. Run your flow so that it stores some state. View the processor's state to verify.
4. Now update the flow in Dev and commit the local change to the Registry.
5. Then update the flow in Prod to the latest version from the Registry. It will preserve the state on the stateful processor.
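If you later want to script the Prod-side version bump rather than clicking through the UI, a community client such as nipyapi can drive the same steps. A minimal sketch, assuming nipyapi is installed and the process group is already under version control (host and flow names are hypothetical):

```python
# A minimal sketch using the community nipyapi client (pip install nipyapi).
# The host and process-group name are hypothetical; verify the function
# names against the nipyapi version you install.
import nipyapi

# Point the client at the Prod instance.
nipyapi.config.nifi_config.host = "https://prod-nifi.example.com/nifi-api"

# Locate the versioned process group on the Prod canvas by name.
pg = nipyapi.canvas.get_process_group("my-s3-listing-flow")

# Move it to the latest version from the Registry. Because this is an
# in-place version change rather than a template re-import, processor
# state (e.g. ListS3's tracking information) is preserved.
nipyapi.versioning.update_flow_ver(pg)
```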
For detailed steps on installing & using Registry, see these links:
https://nifi.apache.org/docs/nifi-registry-docs/html/getting-started.html
https://pierrevillard.com/2018/04/09/automate-workflow-deployment-in-apache-nifi-with-the-nifi-registry/
https://alasdairb.com/2021/03/22/nifi-in-production-nifi-registry/
https://docs.cloudera.com/HDPDocuments/HDF3/HDF-3.2.0/versioning-a-dataflow/content/connecting-to-a-nifi-registry.html
https://docs.cloudera.com/HDPDocuments/HDF3/HDF-3.4.0/getting-started-with-nifi-registry/content/import-a-versioned-flow.html
https://docs.cloudera.com/HDPDocuments/HDF3/HDF-3.4.0/getting-started-with-nifi-registry/content/save-changes-to-a-versioned-flow.html
https://docs.cloudera.com/HDPDocuments/HDF3/HDF-3.4.0/getting-started-with-nifi-registry/content/start-version-control-on-a-process-group.html
Is it possible to take the existing services in an Azure subscription as a reference and create similar services, with updated parameters, in another subscription using either PowerShell or an ARM template?
We are missing a few details when capturing the reference configuration manually and then recreating it with ARM templates. We want this to be end-to-end automation.
You can export the ARM template from existing resources using Export-AzureRmResourceGroup or Save-AzureRmResourceGroupDeploymentTemplate (https://azure.microsoft.com/en-us/blog/export-template/) and then redeploy that template to a new environment.
However, if you are using Export-AzureRmResourceGroup to try to dynamically create an ARM template from existing resources then the generated template will likely not be ready to automatically redeploy. There may be issues with resource dependencies, resources not getting exported correctly, template limitations, etc. It will take a fair bit of manual effort to update the generated ARM template to get it to a point where it can be correctly redeployed into another subscription.
If you are able to use Save-AzureRmResourceGroupDeploymentTemplate (i.e. if your existing resources were all deployed via ARM templates with no post-deployment ad-hoc changes), then the templates should be ready to deploy.
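If you want this as part of an end-to-end automation rather than an interactive cmdlet run, the same export capability is exposed through the Resource Manager REST API. A minimal sketch, assuming you already have a valid bearer token (the api-version and all identifiers below are placeholders):

```python
# A minimal sketch of the Resource Manager exportTemplate call that backs
# Export-AzureRmResourceGroup. The token, IDs, and api-version are
# placeholders; check the current API reference before relying on them.
import requests

SUBSCRIPTION = "00000000-0000-0000-0000-000000000000"  # placeholder
GROUP = "my-resource-group"                            # placeholder
TOKEN = "<bearer token from Azure AD>"                 # placeholder

url = (f"https://management.azure.com/subscriptions/{SUBSCRIPTION}"
       f"/resourcegroups/{GROUP}/exportTemplate")

resp = requests.post(
    url,
    params={"api-version": "2019-10-01"},
    headers={"Authorization": f"Bearer {TOKEN}"},
    json={"resources": ["*"]},  # export every resource in the group
)
resp.raise_for_status()

# The response body contains the generated ARM template, which you can
# save, clean up, and redeploy into the target subscription.
template = resp.json()["template"]
```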
For future reference, the best solution is to always deploy all of your resources via ARM templates (or something like Terraform) where your configuration is all saved in a source repository and you are deploying via a CI/CD pipeline.
On Jelastic, I created a node for building an application (Maven). There are several identical environments (NGINX + Spring Boot); the difference between them is the database each binds to and the configured SSL.
The task is to ensure that after the application (*.jar) is built, it is deployed to these several environments at the same time. How can this be implemented?
When editing a project, it is possible to specify only one environment; multi-selection is not provided.
It's allowed to specify just one environment.
We suggest creating a few environments using one repository branch and running updates via the API (https://docs.jelastic.com/api/#!/api/environment.Vcs-method-Update) after pushing the whole code to VCS.
It's also possible to use CloudScripting to attach custom logic to the onAfterBuildProject event and deploy the project to additional environments after the build completes. Please check this JPS as an example of the code syntax. Most likely you will need the DeployProject API method.
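As a rough illustration of the API-driven approach, the sketch below loops over several environments and calls environment.Vcs.Update for each one after a push to VCS. The host, session token, environment names, and parameter names are assumptions to verify against the linked API docs:

```python
# A minimal sketch of fanning one built project out to several environments
# via the Jelastic API's environment.Vcs.Update method. The host, session
# token, and parameter names are assumptions; verify them against your
# platform's API reference.
import requests

API = "https://app.my-jelastic-host.com/1.0/environment/vcs/rest/update"
SESSION = "<session token from a prior sign-in call>"  # placeholder

# All environments share the same repository branch, so updating each one
# pulls and redeploys the same code.
for env in ["env-prod-1", "env-prod-2", "env-prod-3"]:  # hypothetical names
    resp = requests.get(API, params={
        "session": SESSION,
        "envName": env,
        "project": "my-app",  # hypothetical project name
    })
    resp.raise_for_status()
    print(env, resp.json().get("result"))  # result 0 means success
```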
We are using templates to package up some data-transfer jobs between two NiFi clusters, one acting as the sender, the other as the receiver. One of our jobs contains a remote process group, and everything worked fine at the point the template was created.
However, when we deploy the template through our environments (dev, test, pre, prod), it is tedious and annoying to have to manually delete and recreate the remote process group in the user interface. I'd like to automate this to simplify deploying templates and reduce the manual intervention.
1. Is it possible to update a remote process group and its port configuration through the REST API?
2. Or do I just use the REST API to create a new RPG with the correct configuration?
Does anyone have any experience with this?
There is a JIRA to address this issue [1], which will be worked on in conjunction with some of the ongoing Flow Registry (SDLC for flows) efforts. Until then, the best option would be (2) above.
[1] https://issues.apache.org/jira/browse/NIFI-4526
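As a rough illustration of option (2), the sketch below creates a remote process group via the REST API once the template has been deployed. The host, parent process group id, and target URI are placeholders, and field names can differ slightly between NiFi versions, so check the REST API docs for your release:

```python
# A minimal sketch: POST a new remote process group under a parent group.
# Host, parent group id, and target URI are placeholders.
import requests

NIFI = "https://receiver-nifi.example.com/nifi-api"  # placeholder host
PARENT_PG = "<parent-process-group-id>"              # placeholder id

body = {
    "revision": {"version": 0},  # 0 for a brand-new component
    "component": {
        "targetUris": "https://sender-nifi.example.com/nifi",  # placeholder
        "transportProtocol": "HTTP",
        "position": {"x": 0.0, "y": 0.0},
    },
}

# Authentication is omitted for brevity; secured clusters need a bearer token.
resp = requests.post(
    f"{NIFI}/process-groups/{PARENT_PG}/remote-process-groups",
    json=body,
)
resp.raise_for_status()
print("Created RPG with id", resp.json()["id"])
```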
Suppose I deploy an Azure role supplying a service package and a service configuration. Then I change the configuration one or more times without redeploying the role.
Is it possible to get the initial configuration?
The RoleEnvironment API only reflects the current values.
You could handle the RoleEnvironment.Changing event and keep track of the configuration changes from there.
You can change the service configuration in a number of ways:
Using the management portal:
Click on the deployment (it must be in the Ready state!)
Click on "Configure" and edit the configuration.
Using the Management REST API's Change Deployment Configuration method.
If you go for the second option, you can either create your own classes or use, for example, this NuGet package.
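For illustration, a rough sketch of the Change Deployment Configuration call is below. The subscription id, service name, certificate paths, and x-ms-version value are placeholders to verify against the Service Management API reference:

```python
# A minimal sketch of the classic Service Management "Change Deployment
# Configuration" operation. All identifiers and the api version are
# placeholders; the call authenticates with a management certificate.
import base64
import requests

SUB = "<subscription-id>"          # placeholder
SERVICE = "<hosted-service-name>"  # placeholder

# The new .cscfg is sent base64-encoded inside an XML envelope.
with open("ServiceConfiguration.cscfg", "rb") as f:
    cfg_b64 = base64.b64encode(f.read()).decode("ascii")

body = (
    '<ChangeConfiguration xmlns="http://schemas.microsoft.com/windowsazure">'
    f"<Configuration>{cfg_b64}</Configuration>"
    "</ChangeConfiguration>"
)

resp = requests.post(
    f"https://management.core.windows.net/{SUB}/services/hostedservices/"
    f"{SERVICE}/deploymentslots/production/?comp=config",
    headers={"x-ms-version": "2012-03-01", "Content-Type": "application/xml"},
    data=body,
    cert=("management-cert.pem", "management-key.pem"),  # management cert auth
)
resp.raise_for_status()
```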
However, I don't think you can get the initial service configuration once it has been changed (I'm not aware of a method for doing so).