I set up NiFi and NiFi Registry on different servers, and they communicate fine over HTTPS with certificate-based authentication and authorization.
Now I face a problem with exactly the same setup for another NiFi that needs to communicate with the same NiFi Registry. The problem is that the new NiFi is in a restricted area, behind an http_proxy. I have searched for many days for a solution to this and found nothing about it in the documentation.
In NiFi, under Controller Settings / Registry Clients, is there any way I can tell NiFi that the communication will go through the http_proxy rather than directly?
Nothing in the documentation covers this. Maybe people handle it in another way? Or is it simply not possible?
The versions of NiFi and NiFi Registry are 1.15.3.
I would probably need a clearer understanding of where the proxy sits, but this page describes configuring a proxy in front of NiFi and which fields that proxy would need to set: https://docs.cloudera.com/HDPDocuments/HDF3/HDF-3.5.2/nifi-configuration-best-practices/content/proxy_configuration.html
and for NiFi registry:
https://nifi.apache.org/docs/nifi-registry-docs/html/administration-guide.html#proxy_configuration
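As a minimal sketch of what those docs describe, assuming an Nginx reverse proxy in front of NiFi (hostnames, ports, and the /my-nifi path are placeholders):

location /my-nifi/ {
    proxy_pass https://nifi.internal.example.com:8443/nifi/;
    # Headers described in the proxy-configuration docs; NiFi uses them
    # to generate correct URLs when it sits behind a proxy
    proxy_set_header X-ProxyScheme https;
    proxy_set_header X-ProxyHost proxy.example.com;
    proxy_set_header X-ProxyPort 443;
    proxy_set_header X-ProxyContextPath /my-nifi;
}

On the NiFi side the proxy then has to be whitelisted in nifi.properties, e.g. nifi.web.proxy.host=proxy.example.com:443 and nifi.web.proxy.context.path=/my-nifi; NiFi Registry has analogous properties per the admin guide linked above.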
I have an interesting problem; maybe you could help me out.
There are two Spring applications, called app1 and app2, with plenty of REST calls happening between the two services. I need to implement a security solution where they can communicate with each other over REST, protected by mutual TLS (mTLS, where each app has its own cert for the other).
Implementing it the standard way is not that hard; Spring has solutions for it (with keystores etc.), but the twist is that I have to build it in a Kubernetes environment.
The two apps are not in the same cluster: app1 is in our cluster, but app2 is deployed in one of our partner's systems.
I am pretty new to k8s and not sure what the best method is to achieve this. Should I store the certs or the keystore(s) as Secrets? Use and configure the Nginx ingress somehow? Maybe Istio would be useful? I would really like to find the optimal solution, but I don't know the right way.
I would really like to configure it outside my app and let k8s take care of it, but I am not sure if that is the right thing to do.
Any help would be really appreciated: some guidance toward finding the right path, or some real-life examples.
Thank you for your help!
Mikolaj has probably covered everything, but still, let me add my two cents.
I don't have much experience working with Istio; however, I would also suggest checking out the Linkerd service mesh.
Step 1.
Even if you are on a multi-cloud setup (GKE & EKS, for example), it will still work.
See the Linkerd multicluster guide for details and installation steps.
Linkerd uses a shared trust anchor between the clusters, so traffic flows encrypted without being exposed to the public internet.
You have to generate the certificate that will form the common base of trust between the clusters.
Each proxy gets a copy of the certificate and uses it for validation.
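A rough sketch of how that trust anchor is typically generated with the step CLI and supplied when installing Linkerd into each cluster (names and validity periods follow the Linkerd multicluster docs; adjust to your setup):

# Generate the shared trust anchor (root CA) used by both clusters
step certificate create root.linkerd.cluster.local root.crt root.key \
  --profile root-ca --no-password --insecure

# Generate a per-cluster issuer certificate signed by that root
step certificate create identity.linkerd.cluster.local issuer.crt issuer.key \
  --profile intermediate-ca --not-after 8760h --no-password --insecure \
  --ca root.crt --ca-key root.key

# Install Linkerd into each cluster with the same trust anchor
linkerd install \
  --identity-trust-anchors-file root.crt \
  --identity-issuer-certificate-file issuer.crt \
  --identity-issuer-key-file issuer.key \
  | kubectl apply -f -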
The answer to your problem is more complex, as there is no one-size-fits-all solution that turns out to be best. It all depends on what exactly you want to do and what tools you have for it. suren put it very well in a comment:
if you are still in the PoC stage, note that there are a couple of ways of achieving what you want. Istio would be a valid way, for example. You could have the other service in a ServiceEntry, enable mTLS and there you go. You don't even have to manage secrets for this specific scenario, as it is automatic. But there are other ways, even within Istio. If you are on any cloud provider, you might have some managed services as well
This is a very good comment, and I would also recommend an Istio-based solution. First of all, check the official mTLS documentation for Istio. You will also find specific usage examples and sample configuration files there.
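As a minimal sketch of what enforcing mesh-wide mTLS looks like there (a PeerAuthentication applied to the root namespace, per the Istio security docs; adjust namespaces to your mesh):

apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: istio-system   # the root namespace makes this mesh-wide
spec:
  mtls:
    mode: STRICT            # sidecars reject any plain-text traffic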
You also mentioned in the question that your application will run between two clusters. Take a look at this tutorial, which shows exactly how to solve this situation:
Istio injects an envoy sidecar to every pod and makes sure all the traffic goes through the envoy proxy. Envoy proxies compose the data plane. The control plane manages the Envoy sidecars. In previous versions of Istio, the control plane used to have other components, such as Pilot, Citadel, and Galley. These components got consolidated into a single binary called “istiod”. The control plane also deals with the configurations, certificates, secrets, and health checking.
For more information, also look at a related problem on Stack Overflow and another tutorial.
Take into account that in addition to Istio itself, you can use ready-made cloud solutions, for example on GKE: Configuring TLS and mTLS on the Istio ingress.
Another way might be to use the Anthos Service Mesh tool; see, for example, its mTLS guide.
I understand that the official documentation recommends running NiFi with HTTPS, but it nonetheless says a word about running NiFi over HTTP, e.g. the nifi.web.http.port property.
Also, I'd like to incrementally incorporate the NiFi instance into our current data infrastructure and evolve it from there, starting with non-critical data pipelines. The TLS layer is not necessary right now and could add friction during the deployment phase, so I decided to go down the HTTP path.
After changing some settings, I am able to access NiFi's GUI at http://localhost:8080/nifi, but I found out that I cannot make any changes to the flow. Write operations, i.e. POST/PUT/DELETE requests, are rejected with HTTP 403.
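The changes I made to nifi.properties are roughly along these lines (a sketch; as far as I can tell, NiFi refuses to start if both the HTTP and HTTPS ports are set, so the HTTPS properties have to be cleared):

# nifi.properties: switch from the default HTTPS listener to plain HTTP
nifi.web.http.host=0.0.0.0
nifi.web.http.port=8080
nifi.web.https.host=
nifi.web.https.port=
nifi.remote.input.secure=false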
NiFi doc says:
And by monitoring the API traffic between the GUI and NiFi instance, I can confirm that the PermissionsEntity has both canRead:true and canWrite:true.
I used a containerized NiFi instance.
Has anyone else encountered similar problems?
The root canvas may have been set for the default single-user that NiFi 1.14 generates if it starts up without security configuration.
First thing to try is right-clicking on the canvas and granting yourself access if you can.
The second option: try (re)moving flow.xml.gz, users.xml, and authorizations.xml, and then restarting NiFi. New files will be generated that may work better with anonymous access.
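For example, for a stock installation (the install directory is hypothetical, and paths may differ, e.g. in a container):

# Back the files up rather than deleting them, so you can roll back
cd /opt/nifi/nifi-current
mv conf/flow.xml.gz conf/flow.xml.gz.bak
mv conf/users.xml conf/users.xml.bak
mv conf/authorizations.xml conf/authorizations.xml.bak
./bin/nifi.sh restart   # fresh files are generated on startup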
Either way, setting up security now will probably mean less friction down the road, not more. I strongly advise you to bite the bullet and get it set up securely.
We are using NiFi to collect files from Azure Blob Storage and are sitting behind a forward HTTP proxy. Our organization is tightening security, and now all traffic through the proxy needs to be authenticated. The ListAzureBlobStorage processor does not support this. Is there a way around it?
@Peter Zandbergen You did not share your NiFi version, but in my local NiFi (1.9) there is a Proxy Configuration Service property available on the ListAzureBlobStorage processor. I would recommend 1.9+ to get the best bug fixes and processors.
If that version is not possible, you can put newer-version NARs into an older version of NiFi. This poses additional issues, but it is a viable solution for some scenarios.
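If it helps, the proxy settings live on a StandardProxyConfigurationService controller service, which the processor then references. Roughly (a sketch of the controller-service properties as shown in the NiFi UI; values are placeholders):

Proxy Type:          HTTP
Proxy Server Host:   proxy.example.org
Proxy Server Port:   3128
Proxy User Name:     svc-nifi
Proxy User Password: (set as a sensitive property)

Then point the processor's Proxy Configuration Service property at that service.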
I'm trying to load-balance two web servers (running Apache/PHP) by putting Nginx in front of them. I need to use the round-robin algorithm, but when I do this, I can't manage to keep stable sessions.
(I understand that with round robin, the session information is lost as soon as the next request hits the other server.)
Is there a proper way to achieve this? Any advice on the industry standard for this, please?
FYI, I have already put these two web servers on GlusterFS as a cluster, so I have common storage (in case you want to suggest something based on that).
The nginx manual says that session affinity is in the commercial distribution only (the "sticky" directive). If you don't use the commercial distribution, you'll have to grab a third-party "plugin" and rebuild the server with support for it
(searching for "sticky" should help you find the third-party add-ons).
If there isn't any specific reason for using round robin, you can try the ip_hash load-balancing mechanism.
upstream myapp1 {
    ip_hash;
    server srv1.example.com;
    server srv2.example.com;
    server srv3.example.com;
}
If there is the need to tie a client to a particular application server — in other words, make the client’s session “sticky” or “persistent” in terms of always trying to select a particular server — the ip-hash load balancing mechanism can be used.
Please refer to the nginx load-balancing documentation for more information.
I have Elasticsearch running on my server; by default it runs on port 9200, and the endpoint is public, meaning anyone can insert, update, or delete anything from anywhere. How do I make it secure, like phpMyAdmin, so that it can only be accessed through my code and not directly from a browser or Postman?
Elasticsearch does not perform authentication or authorization, leaving that as an exercise for the developer. Two popular ways I have seen are:
Set up your own proxy (Nginx/HAProxy) fronting Elasticsearch; this way you exercise full control (a minimal sketch follows below). You can also use the elasticsearch-jetty plugin to get Jetty-level auth.
Shield - if budget permits, use Shield, which is a paid offering from Elasticsearch - https://www.elastic.co/products/shield
Even with these in place, depending on who you are exposing this to, you may want to disable certain things like dynamic scripting, add throttles against DoS, etc.
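A minimal sketch of the proxy option, assuming Elasticsearch is bound to localhost (e.g. network.host: 127.0.0.1) and an htpasswd file has been created (e.g. htpasswd -c /etc/nginx/.htpasswd myuser):

server {
    listen 8080;                                   # public-facing port
    location / {
        auth_basic           "Elasticsearch";
        auth_basic_user_file /etc/nginx/.htpasswd;
        proxy_pass           http://127.0.0.1:9200;
    }
}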
You can use the Elasticsearch basic authentication plugin - https://github.com/Asquera/elasticsearch-http-basic
The README there gives a good idea of how to set it up.
If you are using Kibana3 as a frontend to elasticsearch, you can secure it using https://github.com/fangli/kibana-authentication-proxy
I have enabled a relatively simple Nginx proxy that sits between my Elasticsearch and Kibana to configure authorized access to my dashboards and charts.
Look at my post here: https://udaysagars.wordpress.com/2016/04/04/how-i-configured-authorized-access-to-kibana-dashboards/
Also, you can view my application that uses this method here: http://udaysagar2177.github.io/ec2/twitter-analytics.html