How can teams located in different places work on the same NiFi dataflow at the same or different times? And how do we move only the changed parts of a dataflow to the QA and UAT environments?
Generally the approach is to organize the top-level canvas into process groups for each team that will be working on something. Then you can apply security policies to ensure that each team can only modify its respective process groups.
This post shows an example of how to secure an instance and set up policies for process groups: https://bryanbende.com/development/2016/08/17/apache-nifi-1-0-0-authorization-and-multi-tenancy
Deploying flows between environments is an area currently being worked on by the community.
There is a feature proposal here that describes some of the planned capabilities: https://cwiki.apache.org/confluence/display/NIFI/Configuration+Management+of+Flows
There is a sub-project of NiFi called the Registry, which is where the work is being done:
https://nifi.apache.org/registry.html
Currently your options would be to export a template of a process group and import it into another environment. You could script a lot of this using the REST API. Anything you can do from the UI can be done through the REST API, which is easy to see by opening something like Chrome Dev Tools and watching the requests being made while you use the UI. A rough sketch of scripting the template export/import is shown below.
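As a hypothetical illustration (not an official procedure), here is roughly what scripting that export/import could look like with Python and requests. The hostnames, template and process group IDs, and the lack of auth handling are placeholders, and the endpoints should be verified against the REST API docs for your NiFi version.

```python
# Sketch: export a template from one NiFi instance and import it into another
# via the REST API. All URLs and IDs below are placeholders.
import requests

SOURCE = "https://nifi-dev.example.com:8443/nifi-api"
TARGET = "https://nifi-qa.example.com:8443/nifi-api"
TEMPLATE_ID = "00000000-0000-0000-0000-000000000000"   # template created in the dev UI
TARGET_PG_ID = "root"                                   # process group to import into

# NOTE: auth (certs/tokens) is omitted; a secured instance will require it.

# Download the template XML from the source environment.
resp = requests.get(f"{SOURCE}/templates/{TEMPLATE_ID}/download", verify=False)
resp.raise_for_status()
template_xml = resp.content

# Upload it into a process group in the target environment
# (multipart form field named "template").
resp = requests.post(
    f"{TARGET}/process-groups/{TARGET_PG_ID}/templates/upload",
    files={"template": ("flow-template.xml", template_xml, "application/xml")},
    verify=False,
)
resp.raise_for_status()
print("Template uploaded:", resp.status_code)
```

The same approach works for any other UI action: watch the request the UI makes, then replay it from a script.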
I have started using Sentry within my org and am loving it so far.
I've been trying to use its performance monitoring tool with custom metrics added.
While I can add custom metrics to the transactions I'm generating with sentry_sdk (for Python), I can't access them on the dashboard of our self-hosted installation of Sentry.
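For context, this is roughly how I'm attaching the metrics; the DSN, transaction name, metric name, and values here are just illustrative:

```python
import sentry_sdk

# Placeholder DSN for our self-hosted instance.
sentry_sdk.init(
    dsn="https://examplePublicKey@sentry.example.internal/1",
    traces_sample_rate=1.0,
)

# Start a transaction and attach a custom measurement to it.
with sentry_sdk.start_transaction(op="task", name="nightly-import") as transaction:
    rows_processed = 1234  # stand-in for the real work
    transaction.set_measurement("rows_processed", rows_processed, "none")
```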
After a lot of digging, I came across this paragraph here which states that
This feature is only available to organization on our latest plans which include Dynamic Sampling. Customers on legacy plans must move to one of these plans in order to access custom metrics.
From what I gather, their plan in general is to run Sentry on their servers, unless you opt in to the self-hosted code that can be downloaded from GitHub here.
This is absolutely a bummer because I know my org will not consider moving internal data to third-party servers.
I'm wondering if someone knows of a solution to this problem: do the Sentry folks offer (paid) options that enable this feature on the self-hosted version, or has someone patched their open-source code to do it?
I'd also love to hear any out-of-the-box suggestion you folks might have.
In Google's latest docs, they say that to test Go 1.12+ apps locally, one should just run go build.
However, this doesn't take into account all the routing, etc., that would happen in App Engine using the app.yaml config file.
I see that dev_appserver.py is still included in the SDK, but it doesn't seem to work on Windows 10.
How does one test a Go App Engine app locally with the app.yaml, i.e. as an actual emulated App Engine app?
Thank you!
On one hand, if your application consists of just the default service, I would recommend following #cerise-limón's comment suggestion. In general, it is recommended that the application's routing logic be handled within the code. Although I'm not a Go programmer, for single-service applications that use static_files and static_dir there shouldn't be any problems when testing the application locally. You might also deploy the new version without promoting traffic to it in order to test it, as explained here.
On the other hand, if your application is distributed across multiple services and the routing is managed through the dispatch.yaml configuration file you might follow two approaches:
Test each service locally one by one. This could be the way to go if each service has a single responsibility/functionality that could be tested in isolation from the other services. In fact, with this kind of architecture the testing procedure would be more or less the same as for single service applications.
Run all services locally at once and build your own routing layer. This option would allow you to test applications where services need to reach one another in order to fulfill the requests made to them. A rough sketch of such a routing layer is shown after this list.
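To illustrate the second approach, here is a minimal sketch of a local routing layer in Python that forwards requests to locally running services by path prefix, similar to what dispatch.yaml does in production. The ports and path prefixes are assumptions, and only GET requests are handled.

```python
# Tiny local dispatcher: route by path prefix to services running on localhost.
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.error import URLError
from urllib.request import urlopen

ROUTES = {
    "/tools": "http://localhost:8081",
    "/blog":  "http://localhost:8082",
    "/store": "http://localhost:8083",
}
DEFAULT = "http://localhost:8080"  # default service

class Dispatcher(BaseHTTPRequestHandler):
    def do_GET(self):
        backend = next((b for p, b in ROUTES.items() if self.path.startswith(p)), DEFAULT)
        try:
            with urlopen(backend + self.path) as upstream:
                body = upstream.read()
                status = upstream.status
                ctype = upstream.headers.get("Content-Type", "text/plain")
        except URLError:
            body, status, ctype = b"backend unavailable", 502, "text/plain"
        self.send_response(status)
        self.send_header("Content-Type", ctype)
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    # All requests to localhost:8000 are dispatched to the matching service.
    HTTPServer(("localhost", 8000), Dispatcher).serve_forever()
```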
Another approach that is widely used is to have a separate project for development purposes where you could just deploy the application and observe its behavior in the App Engine environment. For applications with highly coupled services this would be the easiest option, but it largely depends on your budget.
Say I have a front end node and three backend nodes: tools, blog, and store. Each node communicates with the others. Each of these nodes has its own set of languages and libraries, and its own Dockerfile.
I understand the DevOps lifecycle of a single monolithic web application, but cannot work out how a DevOps pipeline would work for microservices.
Would each micro-service get its own github repo and CI/CD pipeline?
How do I keep the versions in sync? Let's say the tools microservice uses blog version 2.3. But blog just got pushed to version 2.4, which is incompatible with tools. How do I keep the staging and production environments in sync regarding which version they are supposed to rely on?
If I'm deploying the service tools to multiple different servers, whose IPs may change, how do the other services find the nearest location of this service?
For a monolithic application, I can run one command and simply navigate to a site to interact with my code. What are good practices for developing locally with several different services?
Where can I go to learn more?
Would each micro-service get its own github repo and CI/CD pipeline?
From my experience you can do both. I have seen some teams put multiple micro-services in one repository.
We put each micro-service in a separate repository because the Jenkins pipeline was built in a generic way to build them like that. This included having some configuration files in specific directories, like "/Scripts/microserviceConf.json". This helped us in some cases.
In general you should also consider cost, as GitHub has a pricing model that takes into account how many private repositories you have.
How do I keep the versions in sync? Let's say the tools micro-service uses blog version 2.3. But blog just got pushed to version 2.4, which is incompatible with tools. How do I keep the staging and production environments in sync regarding which version they are supposed to rely on?
You need to be backwards compatible. That means if your blog 2.4 version is not compatible with tools version 2.3, you will have high dependency and coupling, which goes against one of the key benefits of micro-services. There are many ways to get around this.
You can introduce a versioning system to your micro-services. If you have a breaking change to, let's say, an API, you need to keep supporting the old version for some time and create a new v2 of the API. For example, POST "blogs/api/blog" would get a new counterpart POST "blogs/api/v2/blog" with the new features, and the tools micro-service would have some bridge time in which you support both APIs so it can migrate to v2. A rough sketch of this is shown below.
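As a rough illustration of that bridge period (using Python/Flask purely as an example; the endpoint paths and payload fields are made up, not the real services), the blog service could expose both versions side by side:

```python
# Sketch: keep v1 alive while introducing v2 with the breaking change.
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/blogs/api/blog", methods=["POST"])
def create_blog_v1():
    # Old contract, kept for consumers still on v1 (e.g. tools 2.3).
    data = request.get_json()
    return jsonify({"id": 1, "title": data["title"]}), 201

@app.route("/blogs/api/v2/blog", methods=["POST"])
def create_blog_v2():
    # New contract with the changes introduced in blog 2.4.
    data = request.get_json()
    return jsonify({"id": 1, "title": data["title"], "tags": data.get("tags", [])}), 201

if __name__ == "__main__":
    app.run(port=8082)
```

Once all consumers have migrated, the v1 route can be retired.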
Also take a look at Semantic versioning here.
If I'm deploying the service tools to multiple different servers, whose IPs may change, how do the other services find the nearest location of this service?
I am not quite sure what you mean here, but this goes in the direction of micro-service orchestration and service discovery. Usually your cloud provider's services have tools to deal with this. You can take a look at AWS ECS and/or the AWS EKS Kubernetes service and how they handle it.
For a monolithic application, I can run one command and simply navigate to a site to interact with my code. What are good practices for developing locally with several different services?
I would suggest using docker and docker-compose to create your development setup. You would create a local development network of docker containers which would represent your whole system. This would include your micro-services, infrastructure (database, cache, helpers), and so on. You can read more about it in this answer here; it is described in the section "Considering the Development Setup".
Where can I go to learn more?
There are multiple sources for learning this. Some are:
https://microservices.io/
https://www.datamation.com/applications/devops-and-microservices.html
https://www.mindtree.com/blog/look-devops-microservices
https://learn.microsoft.com/en-us/dotnet/standard/microservices-architecture/multi-container-microservice-net-applications/multi-container-applications-docker-compose
Is there any way I can exchange data between two organizations?
I want to do my coding in a plugin only. Can we write code in a plugin that accesses/manipulates the data of a different org through web services only, not by directly hitting its database?
I know the orgs are different worker groups. I just wanted to know if it's possible, or if there is any other technique.
Thanks in advance.
The data for each CRM organisation is exposed via web services, which differ slightly between CRM 2011 and CRM 4. The best thing to do is download the latest version of the SDK for the target platform, as there are several examples in there for plugins and service-based operations.
From your plugin you will be able to access the other organisation via this service, and a connection to the service for the "local" organisation in which the plugin is running will be available from the IExecutionContext parameter passed to your plugin. Any operations you carry out across both orgs will not be transactional though.
Also be sure to take a look at the sync and async options available for the plugins. If their use is appropriate for your scenario consider using an async plugin for the updates to the target org to minimise their effect on the source org.
Plug-ins will work. Hitting the database directly is actually not a supported model anyway. You can also think of using BizTalk as the middleware.
We are looking at a standard way of configuring the various "endpoints" of our application. Our application is a distributed system with Windows Desktop applications, Windows Server "services" and databases.
We currently configure each piece using XML files. This is getting a little out of hand as we work with larger customers who can have dozens of servers running our application and hundreds of desktop clients.
Can anyone recommend a Microsoft technology or a third party that would allow us to centralize all that configuration information and manage it in a one place for all our applications? Any changes would be "pushed" to the endpoint(s) that are interested.
For example, if we were to change the login for one of our databases, we would make that change on the database and then reflect it in our centralized system. Following that last step, any service that needs to connect to the database would be notified of the change (and potentially receive the new data). How and what each endpoint does with that information is outside the scope of the system.
Our primary business is not "Centralized Configuration Services". We are a GIS company that provides solutions for various utilities worldwide.
I've done a couple of things to give myself this functionality over the years. I build enterprise applications that may be distributed across many servers, and I don't want to bury config settings in each service's config file or each web server's web.config file.
For application-specific stuff I usually create an application settings table in the app's database. The table has only two fields: SettingName and SettingValue. I then write a web or WCF service whose sole function is to retrieve these settings. It exposes a function called GetSetting where you pass SettingName and it returns SettingValue, or an empty string if the setting is not found. This way I can store all application settings for all components of the application in one spot. Maintenance and troubleshooting are really easy; I'm not hunting through scads of config files spread across a dozen web and app servers.
For larger-scale apps I might create a separate AppSettings database where I add a new field, ApplicationName, to the table mentioned above. My web or WCF service for this approach has the same method call (GetSetting), only at this scope you pass ApplicationName and SettingName and it returns SettingValue or an empty string. A minimal sketch of this idea is shown below.
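As a minimal sketch of the same idea (in Python purely for illustration; the original implementation was a web/WCF service, and the table, column, and endpoint names here are assumptions):

```python
# Sketch: a single AppSettings table plus a GetSetting-style lookup that
# returns an empty string when the setting is not found.
import sqlite3
from flask import Flask, request

app = Flask(__name__)
DB = "appsettings.db"

def init_db() -> None:
    # Create the table if it does not exist yet (sketch only).
    with sqlite3.connect(DB) as conn:
        conn.execute(
            "CREATE TABLE IF NOT EXISTS AppSettings ("
            "ApplicationName TEXT, SettingName TEXT, SettingValue TEXT)"
        )

def get_setting(application_name: str, setting_name: str) -> str:
    with sqlite3.connect(DB) as conn:
        row = conn.execute(
            "SELECT SettingValue FROM AppSettings "
            "WHERE ApplicationName = ? AND SettingName = ?",
            (application_name, setting_name),
        ).fetchone()
    return row[0] if row else ""

@app.route("/GetSetting")
def get_setting_endpoint():
    # e.g. GET /GetSetting?application=Billing&name=SmtpServer
    return get_setting(request.args.get("application", ""), request.args.get("name", ""))

if __name__ == "__main__":
    init_db()
    app.run(port=5000)
```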
Doing either of these things allows you to centralize all app settings for any size application or IT shop. It has worked really well for us.
You could use RSS together with BitTorrent to distribute changes. See Wikipedia. It is not MS-specific, but it should provide the flexibility you need: a configuration server holding the configuration and providing the feeds needed to configure the clients and possibly the servers.
Any VCS through a secure channel?
For example, Git over SSH (both available in Cygwin).
I think the first step is to have the secure channel (if you want the push ability, pulling might be different).
As for managing the "versions" in different "branches", what's better than a version control system?
As for the Microsoft requirement, well, the Microsoft software that exists in that area would suck pretty badly in your case (as in, not the best tool for the job).