Apache NiFi Controller Start not working with nipyapi client - apache-nifi

I have been using the nipyapi client to manage new Apache NiFi deployments and it is working great, but I am getting an issue when trying to ENABLE a Controller Service.
My Setup:
I run NiFi in Docker, and every time a container starts there is a series of steps such as:
Build NiFi server - OK
Download the templates.xml - OK
Upload templates to NiFi - OK
Deploy templates to NiFi Canvas - OK
ENABLE Controller Service - ERROR
import json
import requests
import nipyapi

nipyapi.config.nifi_config.host = 'http://localhost:9999/nifi-api'
nipyapi.canvas.get_controller('MariaDB', identifier_type='name', bool_response=False)

# Enable Controller
headers = {'Content-Type': 'application/json'}
url = 'http://localhost:9999/nifi-api/flow/process-groups/' + nipyapi.canvas.get_root_pg_id() + '/controller-services'
r = requests.get(url)
response = json.loads(r.text)
controllerId = response['controllerServices'][0]['id']
nipyapi.canvas.schedule_controller(controllerId, 'True', refresh=False)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/lib/python3.5/dist-packages/nipyapi/canvas.py", line 1222, in schedule_controller
assert isinstance(controller, nipyapi.nifi.ControllerServiceEntity)
AssertionError
Not sure what I am missing!
PS - I have been trying the nifi-toolkit as well, but it is not working reliably either:
./cli.sh nifi pg-enable-services --processGroupId 2b8b54ca-016b-1000-0655-c3ec484fd81d -u http://localhost:9999 --verbose
Sometimes it works, sometimes it does not!
I would like to stick with one tool, e.g. the toolkit or nipyapi (which is faster).
Any help would be great! Thanks.

Per the error, NiPyAPI expects to be passed the ControllerServiceEntity object itself, not just its ID.
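A minimal sketch of how that might look, assuming the controller service is named 'MariaDB' as in your script:

import nipyapi

nipyapi.config.nifi_config.host = 'http://localhost:9999/nifi-api'

# Fetch the ControllerServiceEntity itself rather than only its id
controller = nipyapi.canvas.get_controller('MariaDB', identifier_type='name')

# schedule_controller expects the entity and a boolean, not the string 'True'
nipyapi.canvas.schedule_controller(controller, True)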

Related

Dialogflow CX - Location settings have to be initialized - FAILED_PRECONDITION

I am automating Dialogflow CX using the Python client libraries. That includes agent/intent/entity etc. creation/update/deletion.
But on the first run, I am encountering the error below from Python.
If I log in to the console, set the location from there, and rerun the code, it works fine and I am able to create the agent.
I followed this GCP documentation:
https://cloud.google.com/dialogflow/cx/docs/concept/region
I am looking for code to automate the region & location setting before running the Python code. Kindly provide me with the code.
Below is the code I am using to create the agent.
Error -
google.api_core.exceptions.FailedPrecondition: 400 com.google.apps.framework.request.FailedPreconditionException: Location settings have to be initialized before creating the agent in location: us-east1. Code: FAILED_PRECONDITION
grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
status = StatusCode.FAILED_PRECONDITION
details = "com.google.apps.framework.request.FailedPreconditionException: Location settings have to be initialized before creating the agent in location: us-east1. Code: FAILED_PRECONDITION"
debug_error_string = "{"created":"#1622183899.891000000","description":"Error received from peer ipv4:142.250.195.170:443","file":"src/core/lib/surface/call.cc","file_line":1068,"grpc_message":"com.google.apps.framework.request.FailedPreconditionException: Location settings have to be initialized before creating the agent in location: us-east1. Code: FAILED_PRECONDITION","grpc_status":9}"
main.py -
# Import Libraries
import google.auth
import google.auth.transport.requests
from google.cloud import dialogflowcx as df
from google.protobuf.field_mask_pb2 import FieldMask
import os, time
import pandas as pd

# Function - Authentication
def gcp_auth():
    cred, project = google.auth.default(scopes=["https://www.googleapis.com/auth/cloud-platform"])
    auth_req = google.auth.transport.requests.Request()
    cred.refresh(auth_req)

# Function - Create Agent
def create_agent(agent_name, agent_description, language_code, location_id, location_path):
    if location_id == "global":
        agentsClient = df.AgentsClient()
    else:
        agentsClient = df.AgentsClient(client_options={"api_endpoint": f"{location_id}-dialogflow.googleapis.com:443"})
    agent = df.Agent(display_name=agent_name, description=agent_description, default_language_code=language_code, time_zone=time_zone, enable_stackdriver_logging=True)
    createAgentRequest = df.CreateAgentRequest(agent=agent, parent=location_path)
    agent = agentsClient.create_agent(request=createAgentRequest)
    return agent
Currently, Dialogflow does not support configuring the location settings through the API, so you cannot initialise location settings through it. You can only set the location through the Console.
As an alternative, since the location setting has to be initialised only once for each region per project, you could set the location manually and then automate the agent creation process; some useful links: 1 and 2.
On the other hand, if you would find this feature useful, you can file a Feature Request, here. It will be evaluated by Google's product team.
Many thanks Alexandre Moraes. I have raised a feature request for the same.

Can't establish connection to Web Server using rosbridge

I have created a simple HTML page to control the movement of a simulated Gazebo Turtlebot using roslaunch rosbridge_server rosbridge_websocket.launch following this tutorial.
However, in the Web Console of the HTML page (F12) it shows the error "Firefox can't establish a connection to the server at ws://localhost:9090/." I am using the default rosbridge WebSocket port (9090). In the terminal I am also receiving the errors:
[-] failing WebSocket opening handshake ('WebSocket connection denied: origin 'null' not allowed')
[-] dropping connection to peer tcp4:127.0.0.1:41290 with abort=False: WebSocket connection denied: origin 'null' not allowed.
Does anyone have any suggestions on how I can fix this?
Given that you have followed the ROS tutorial and created an HTML file as shown in the rosbridge tutorial, you then have to run:
roscore
rosrun rospy_tutorials add_two_ints_server
roslaunch rosbridge_server rosbridge_websocket.launch
Now that you have these up and running, you need to serve the HTML/JavaScript file (e.g. simple.html) over HTTP instead of opening it straight from disk. For example, you can serve simple.html using SimpleHTTPServer; see the example below (e.g. simplehttpserver_test.py):
#!/usr/bin/env python
import SimpleHTTPServer
import SocketServer

class MyRequestHandler(SimpleHTTPServer.SimpleHTTPRequestHandler):
    def do_GET(self):
        if self.path == '/':
            self.path = '/simple.html'
        return SimpleHTTPServer.SimpleHTTPRequestHandler.do_GET(self)

Handler = MyRequestHandler
server = SocketServer.TCPServer(('127.0.0.1', 9089), Handler)
server.serve_forever()
Once you run simplehttpserver_test.py you can open the browser at 127.0.0.1:9089 and you should have it working: the page is now served over HTTP, so it has a proper origin instead of 'null' and rosbridge should no longer reject the WebSocket handshake.
Note that SimpleHTTPServer serves files from the current directory and below, directly mapping the directory structure to HTTP requests, which means that simple.html should be in the same directory as (or below) simplehttpserver_test.py. Lastly, the port for simplehttpserver_test.py should differ from the one used for the rosbridge WebSocket server (the default is 9090).
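If you are on Python 3, SimpleHTTPServer and SocketServer have been merged into http.server and socketserver; a roughly equivalent sketch (same hypothetical simple.html and port 9089) would be:

#!/usr/bin/env python3
import http.server
import socketserver

class MyRequestHandler(http.server.SimpleHTTPRequestHandler):
    def do_GET(self):
        # Map the root path to the page that talks to rosbridge
        if self.path == '/':
            self.path = '/simple.html'
        return http.server.SimpleHTTPRequestHandler.do_GET(self)

with socketserver.TCPServer(('127.0.0.1', 9089), MyRequestHandler) as server:
    server.serve_forever()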

Weird "is_xhr" error when deploying Flask app to Heroku

I have a Flask app which I've deployed to Heroku; one of the routes is the following:
def get_kws():
    seed_kw = request.json['firstParam']
    audience_max = request.json['secondParam']
    interest_mining_service = InterestMiningService(seed_kw, audience_max)
    query_result = interest_mining_service.query_keyword().tolist()
    if seed_kw in query_result:
        print("yes")
    return jsonify(
        {
            'keyword_data': interest_mining_service.find_kws().to_json(orient='records'),
            'query_results': query_result
        }
    )
When I test this endpoint locally, I have no issues when sending POST and GET requests to that endpoint. However, when I deploy to Heroku, I get the following error:
File "/app/server/controller.py", line 24, in get_kws
2020-02-08T22:31:05.893850+00:00 app[web.1]: 'query_results': query_result
2020-02-08T22:31:05.893850+00:00 app[web.1]: File "/app/.heroku/python/lib/python3.6/site-packages/flask/json.py", line 298, in jsonify
2020-02-08T22:31:05.893851+00:00 app[web.1]: if current_app.config['JSONIFY_PRETTYPRINT_REGULAR'] and not request.is_xhr:
2020-02-08T22:31:05.893851+00:00 app[web.1]: File "/app/.heroku/python/lib/python3.6/site-packages/werkzeug/local.py", line 347, in __getattr__
2020-02-08T22:31:05.893852+00:00 app[web.1]: return getattr(self._get_current_object(), name)
2020-02-08T22:31:05.893858+00:00 app[web.1]: AttributeError: 'Request' object has no attribute 'is_xhr'
I've never seen this error Request object has no attribute 'is_xhr' before and it only seems to be happening when I deploy to Heroku. Any guidance on what I should look into?
There also doesn't seem to be an issue with the json key keyword_data - the issue seems limited to query_results which is a list.
The Werkzeug library (a dependency of Flask) recently received a major update (0.16.1 --> 1.0.0) that removed the long-deprecated request.is_xhr attribute, and it looks like Flask (<=0.12.4) does not restrict the version of Werkzeug that is fetched.
You have two options:
Stick with your current version of Flask and restrict the Werkzeug version that is fetched explicitly in your application's setup.py or requirements.txt by specifying werkzeug<1.0 or werkzeug==0.16.1 (see the example below).
Upgrade to a recent version of Flask (>=1.0.0), which runs fine with the latest Werkzeug.
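For the first option, a minimal requirements.txt sketch could look like this (the Flask version shown is only an illustration; keep whichever version you are actually on):

# requirements.txt
Flask==0.12.4        # your current Flask version
Werkzeug==0.16.1     # pin below 1.0 so request.is_xhr still exists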
Or you can just forcefully install the older version again by calling
pip install Werkzeug==0.16.1
I have faced this problem too.
I temporarily fixed it by directly checking the request header:
request.headers.get("X-Requested-With") == "XMLHttpRequest"
Not sure if this helps...

Access Apache Oozie REST Services over Kerberos using python urllib2

I was planning to improve visualization details for an internal project using information provided by the Oozie REST services.
Below is the Python script to access the Oozie REST information:
import json
import urllib2

req = urllib2.Request('http://localhost:11000/oozie/v1/jobs?jobtype=wf')
response = urllib2.urlopen(req)
print json.dumps(json.loads(response.read()), indent=4, separators=(',', ': '))
It works smoothly in a non-Kerberos-protected environment.
When the script is run in a Kerberos-protected environment, it results in:
File "/usr/lib64/python2.6/urllib2.py", line 518, in http_error_default
raise HTTPError(req.get_full_url(), code, msg, hdrs, fp)
urllib2.HTTPError: HTTP Error 401: Unauthorized
Could anyone help with this issue?

nginx + python app -- how to enable error logging/stack trace

I have a Flask app running on nginx + uWSGI.
On my local server (non-nginx), I get a nice stack trace + error reporting for exceptions.
Like this:
$ python run.py
Traceback (most recent call last):
File "run.py", line 1, in <module>
from myappname import app
File "/home/me/myappname/myappname/__init__.py", line 27, in <module>
file_handler.setLevel(logging.debug)
File "/usr/lib/python2.7/logging/__init__.py", line 710, in setLevel
self.level = _checkLevel(level)
File "/usr/lib/python2.7/logging/__init__.py", line 190, in _checkLevel
raise TypeError("Level not an integer or a valid string: %r" % level)
On nginx, there is next to no logging whatsoever (in /var/log/nginx/error.log).
This post suggests adding app.logger.exception('Failed') to my script, which didn't help.
How do I enable this sort of logging for debugging purposes?
Nginx will capture your app's console output, but you must make the app recover from exceptions; otherwise you'll only get 500 or 400 errors from nginx.
Try running the app outside nginx until it seems stable.
Use the logging module to capture app status information in your own log file. This strategy will be useful in the long run.
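For instance, a minimal sketch of file-based logging inside myappname/__init__.py, after the app object is created (the file name app.log and the format string are placeholders). Note that setLevel needs the constant logging.DEBUG, not the function logging.debug, which is exactly what the TypeError in your local traceback is complaining about:

import logging
from logging.handlers import RotatingFileHandler

# ... after `app = Flask(__name__)` ...
file_handler = RotatingFileHandler('app.log', maxBytes=1024 * 1024, backupCount=3)
file_handler.setLevel(logging.DEBUG)  # a level constant, not logging.debug
file_handler.setFormatter(logging.Formatter(
    '%(asctime)s %(levelname)s: %(message)s [in %(pathname)s:%(lineno)d]'))

app.logger.addHandler(file_handler)
app.logger.setLevel(logging.DEBUG)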
