I'm trying to get access to a NiFi flow through nipyapi, using LDAP authentication.
I have NiFi and NiFi Registry up and running, and I have a login/password ('login'/'password').
import nipyapi
nipyapi.config.nifi_config.host = 'https://nifiexample.com/nifi'
nipyapi.config.registry_config.host = 'https://nifiexample.com/nifi-registry'
print(nipyapi.canvas.get_root_pg_id())
I read the docs and found this method:
nipyapi.security.set_service_ssl_context(service='nifi', ca_file=None, client_cert_file=None, client_key_file=None, client_key_password=None)
but since I'm not a developer, I don't understand how to use it properly.
Can someone please tell me what other configs/properties I should add to run this simple script?
I would recommend using the Secured Connection Demo from the docs. The Python code goes through this process step-by-step.
Understanding how NiFi uses TLS and performs authentication and authorization will also help these steps make sense.
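For an LDAP-secured NiFi, the usual pattern with nipyapi is to point the config at the REST API endpoints, register the CA certificate that signed the server's TLS certificate, and then log in with the LDAP credentials. A minimal sketch, assuming default API paths, with the hosts and the CA path as placeholders:

import nipyapi

# Point at the REST APIs (note the -api suffixes), not the UI URLs.
nipyapi.config.nifi_config.host = 'https://nifiexample.com/nifi-api'
nipyapi.config.registry_config.host = 'https://nifiexample.com/nifi-registry-api'

# Trust the server certificate. The client_cert/key arguments are only needed
# for mutual TLS, which an LDAP login normally doesn't use.
nipyapi.security.set_service_ssl_context(
    service='nifi',
    ca_file='/path/to/ca-cert.pem'  # placeholder path
)

# Authenticate with the LDAP username/password, then call the API.
nipyapi.security.service_login(service='nifi', username='login', password='password')
print(nipyapi.canvas.get_root_pg_id())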
So I'm building an app based on Express and using the Prisma ORM. What I need is to SSH to a server, open up an Express.js console, and create a new DB entry using Prisma. Something similar to python manage.py shell for Django or rails console for Rails. Is there a solution of any kind for this?
As I pointed out in the comment, there is a (kind of) way to get access to a running Express instance. If that's all you need, follow:
How can I open a console to interact with Express app?
Express doesn't really have a feature like rails console, which in that case is a framework feature.
That said, I question the long-term implications of this approach. If you really just need to seed some data, write an "init" script and call it after you SSH into the server, or via some CI/CD step. This is more reusable, since you can even pass a JSON file to the script to load dynamic data.
Also, Prisma has an official way to seed data (if that's what you need) that you can leverage:
https://www.prisma.io/docs/guides/database/seed-database
UPDATE:
If you are able to run the code on your machine and point it at the remote database, then you can use node --inspect to debug in a Chrome console, which should give you roughly the same effect as a Rails REPL:
https://medium.com/@tbernardes/debugging-nodejs-with-chrome-inspector-devtools-1cd2ef323b5e
I am using both H2O and Sparkling Water on Amazon clusters. I have been using Qubole and have been able to access the Flow UI on that platform. I am currently testing Databricks and SageMaker, but I am unable to access the Flow UI on either platform (using port 54321). I am using H2O_cluster_version 3.32.1.3. Do I need to use another port?
Getting the right Flow URL can be tricky because of how the base URL changes on Databricks (DBC). There were some improvements in more recent releases of Sparkling Water that print the proper URL within Databricks, so make sure you try the latest version.
You should get it from the printed output when you create an H2OContext. The port would be 9009. If you want to change it, you can set spark.ext.h2o.client.web.port.
You can also find the link in "Spark UI" -> "Sparkling Water" tab
The format would be something like: https://your-dbc-domain/driver-proxy/o/xxxxxxxx/yyyyyyy/9009/flow/index.html
From the docs for reference:
Flow is accessible via the URL printed out after H2OContext is started. Internally we use open port 9009. If you have an environment where a different port is open on your Azure Databricks cluster, you can configure it via spark.ext.h2o.client.web.port or corresponding setter on H2OConf.
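For example, with PySparkling on Databricks, something like this prints the context summary that contains the Flow link (a sketch, assuming pysparkling is installed on the cluster; in older Sparkling Water releases getOrCreate() takes the SparkSession as an argument):

from pysparkling import H2OContext

# spark.ext.h2o.client.web.port defaults to 9009 inside Databricks; override it
# in the cluster's Spark config if that port isn't open for you.
hc = H2OContext.getOrCreate()
print(hc)  # the printed summary includes the Flow URL (a driver-proxy link on Databricks)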
I've worked the whole day on a NiFi flow in a local Docker container. Once finished, I downloaded the flow as a JSON file and killed the container. I now want to import it into my NiFi instance on Kubernetes. Unfortunately, it seems that the way to go is using templates. So I guess the "download flow as JSON" function is a one-way road? Or what is the purpose of this functionality?
Is there a way to convert this JSON to a template.xml? Otherwise I have to redo all my work.
You can upload the flow definition when creating the Process Group, using the "Browse" icon.
You need NiFi Registry to import:
Good resource: https://community.cloudera.com/t5/Community-Articles/How-to-import-a-flow-to-NiFi-registry-in-CDP-Cloud/ta-p/308335
Personally, I'm not a fan of the posts by Timothy Spann; they may be useful, but they lack a lot of explanation.
Summary:
Install NiFi Registry
Connect NiFi with the registry
Import the JSON file manually, or use the NiFi Toolkit or NiPyAPI to do it programmatically (see the sketch below).
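For the NiPyAPI route, a rough sketch (hosts, bucket name, and file path are placeholders, and I'm assuming the downloaded flow-definition JSON matches the versioned-flow export format the registry accepts):

import nipyapi

nipyapi.config.nifi_config.host = 'https://my-nifi.example.com/nifi-api'
nipyapi.config.registry_config.host = 'https://my-registry.example.com/nifi-registry-api'

# Create (or look up) a bucket in NiFi Registry, then import the JSON export into it.
bucket = nipyapi.versioning.create_registry_bucket('my_bucket')
nipyapi.versioning.import_flow_version(
    bucket_id=bucket.identifier,
    file_path='/path/to/downloaded_flow.json',
    flow_name='my_imported_flow'
)

Once the flow is in the registry, you can add a Process Group in NiFi and import it from that bucket.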
In a secured Hadoop cluster, I am trying to access the Flink AM page and logs from YARN and I am seeing the following error:
User %remoteUser are not authorized to view application %appID
It seems that the cause is a lack of support for YARN ACLs on the Flink side.
How the code works
The message comes from the hadoop/yarn/server AppBlock class, which uses the ApplicationACLsManager class. This class performs the check against the app info that was set in RMAppManager:
this.applicationACLsManager.addApplication(applicationId,
    submissionContext.getAMContainerSpec().getApplicationACLs());
AMContainerSpec is a ContainerLaunchContext, which has a protobuf (PB) implementation and is submitted from the framework side.
On the Flink side, this object is created in the AbstractYarnClusterDescriptor class, which (like the other Flink classes) doesn't call setApplicationACLs.
Question
Is there a way to bypass this, or is the right solution to contribute this support to Flink? What is the state of this feature on the Flink side?
This sounds like a limitation in Flink which we should fix. Please open a JIRA issue. The community would be very happy if you could help implementing it.
I am a beginner with Elasticsearch and Hadoop. I am having a problem moving data from HDFS into an Elasticsearch server using es.net.proxy.http.host with credentials. The server is secured with credentials using an nginx proxy configuration. But when I try to move data using a Pig script, it throws a NullPointerException.
My Pig script is:
REGISTER elasticsearch-hadoop-1.3.0.M3/dist/elasticsearch-hadoop-1.3.0.M3.jar
A = load 'date' using PigStorage() as (date:datetime);
store A into 'doc/id' using org.elasticsearch.hadoop.pig.EsStorage('es.net.proxy.http.host=ipaddress','es.net.proxy.http.port=portnumber','es.net.proxy.http.user=username','es.net.proxy.http.pass=password');
I don't understand where the problem is in my script. Can anyone please help me?
Thanks in advance.
I faced this type of problem. elasticsearch-hadoop-1.3.0.M3.jar does not seem to support the proxy and authentication settings. You could try the elasticsearch-hadoop-1.3.0.BUILD-SNAPSHOT.jar file instead. But I couldn't move object data like a Tuple to the production server with authentication.
Thank you.