Access Apache Oozie REST Services over Kerberos using python urllib2 - hadoop

I was planning to improve visualization details for an internal project using information provided by the Oozie REST services.
Below is the Python script used to access the Oozie REST information:
import json
import urllib2

req = urllib2.Request('http://localhost:11000/oozie/v1/jobs?jobtype=wf')
response = urllib2.urlopen(req)
print json.dumps(json.loads(response.read()), indent=4, separators=(',', ': '))
It works smoothly in a non-Kerberos environment.
When the script is run in a Kerberos-protected environment, it fails with:
File "/usr/lib64/python2.6/urllib2.py", line 518, in http_error_default
raise HTTPError(req.get_full_url(), code, msg, hdrs, fp)
urllib2.HTTPError: HTTP Error 401: Unauthorized
Could anyone help with this issue?
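A common approach for SPNEGO/Kerberos-protected Hadoop REST endpoints is to use the requests library together with the requests_kerberos add-on instead of plain urllib2, after obtaining a ticket with kinit. A minimal sketch, assuming both packages are installed, a valid Kerberos ticket exists, and the same Oozie URL as above:

import json

import requests
from requests_kerberos import HTTPKerberosAuth, OPTIONAL

# Negotiate (SPNEGO) authentication using the ticket cache created by kinit.
auth = HTTPKerberosAuth(mutual_authentication=OPTIONAL)

resp = requests.get('http://localhost:11000/oozie/v1/jobs?jobtype=wf', auth=auth)
resp.raise_for_status()
print json.dumps(resp.json(), indent=4, separators=(',', ': '))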

Related

Apache NiFi Controller Start not working with nipyapi client

I have been using the nipyapi client to manage new Apache NiFi deployments and it works great, but I am hitting an issue when trying to ENABLE a Controller Service.
My Setup:
I run NiFi in Docker, and every time a container starts there is a series of steps:
Build NiFi server - OK
Download the templates.xml - OK
Upload templates to NiFi - OK
Deploy templates to NiFi Canvas - OK
ENABLE Controller Service - ERROR
import json

import requests

import nipyapi

nipyapi.config.nifi_config.host = 'http://localhost:9999/nifi-api'
nipyapi.canvas.get_controller('MariaDB', identifier_type='name', bool_response=False)

# Enable controller
headers = {'Content-Type': 'application/json'}
url = 'http://localhost:9999/nifi-api/flow/process-groups/' + nipyapi.canvas.get_root_pg_id() + '/controller-services'
r = requests.get(url)
response = json.loads(r.text)
controllerId = response['controllerServices'][0]['id']
nipyapi.canvas.schedule_controller(controllerId, 'True', refresh=False)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/lib/python3.5/dist-packages/nipyapi/canvas.py", line 1222, in schedule_controller
assert isinstance(controller, nipyapi.nifi.ControllerServiceEntity)
AssertionError
Not sure what I am missing!
PS: I have also been trying the nifi-toolkit, but it is not working reliably either:
./cli.sh nifi pg-enable-services --processGroupId 2b8b54ca-016b-1000-0655-c3ec484fd81d -u http://localhost:9999 --verbose
Sometimes it works and sometimes it does not.
I would like to stick with one tool, e.g. the toolkit or nipyapi (which is faster).
Any help would be great, thanks!
Per the error, NiPyAPI expects to be passed the Controller object, not just the ID.
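A minimal sketch of that fix, assuming the controller service is named 'MariaDB' as in the question: fetch the full ControllerServiceEntity with nipyapi and pass the object itself to schedule_controller.

import nipyapi

nipyapi.config.nifi_config.host = 'http://localhost:9999/nifi-api'

# Fetch the whole ControllerServiceEntity rather than just its id string.
controller = nipyapi.canvas.get_controller('MariaDB', identifier_type='name')

# Pass the entity object; a real boolean (not the string 'True') is also the safer choice here.
nipyapi.canvas.schedule_controller(controller, True)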

Jmeter - Plugins behind the proxy

I placed the Plugins Manager in the "lib\ext" folder; when I tried to open it, it showed the error:
java.io.IOException: Repository responded with wrong status code: 407
Jmeter version - 3.3
Plugin version - 0.16
JMeter is invoked from the command line using the following parameters:
C:\Users\princen\Performance Testing\Software\apache-jmeter-3.3\bin\jmeter.bat -H Proxyserver -P 1234 -u princen -a ***
The parameters were then modified as suggested here:
JVM_ARGS="-Dhttps.proxyHost=Proxyserver -Dhttps.proxyPort=1234 -Dhttp.proxyUser=princen -Dhttp.proxyPass=***" C:\Users\princen\Performance Testing\Software\apache-jmeter-3.3\bin\jmeter.bat
That attempt gives the following error message:
Windows cannot find "JVM_ARGS="-Dhttps.proxyHost=Proxyserver -Dhttps.proxyPort=1234 -Dhttp.proxyUser=princen -Dhttp.proxyPass=***
When I changed the command to the following:
C:\Users\princen\Performance Testing\Software\apache-jmeter-3.3\bin\jmeter.bat -Dhttps.proxyHost=Proxyserver -Dhttps.proxyPort=1234 -Dhttp.proxyUser=princen -Dhttp.proxyPass=***
I received an error:
java.io.IOException: Repository responded with wrong status code: 407
Can someone please provide the correct parameters required to load the Plugins Manager?
Ensure you use the latest version of the jmeter-plugins Plugins Manager.
Regarding your parameters, you're mixing different configurations; just set (for both http and https):
JVM_ARGS="-Dhttps.proxyHost=myproxy.com -Dhttps.proxyPort=8080 -Dhttps.proxyUser=john -Dhttps.proxyPass=password -Dhttp.proxyHost=myproxy.com -Dhttp.proxyPort=8080 -Dhttp.proxyUser=john -Dhttp.proxyPass=password"
Where password is your real password.
None of the above methods worked for me. It's really tough to work with Java (coming from a LoadRunner background). I added the Ultimate Thread Group plugin on its own and it works fine.
Thank you all for your inputs.
JMeter uses the official proxy configuration from Oracle (as described here: https://memorynotfound.com/configure-http-proxy-settings-java/).
The problem is that the JMeter documentation is wrong about the password parameter: it should be http.proxyPassword, not http.proxyPass.
Also, you must use the https.* properties for secured URLs you want to access through the proxy, and the http.* properties for non-secured ones.
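Putting the two answers together on Windows: the JVM_ARGS= prefix is Unix shell syntax, which is why cmd.exe rejected it, so the variable has to be set with set before launching JMeter. A sketch with placeholder proxy host and credentials:

set JVM_ARGS=-Dhttp.proxyHost=myproxy.com -Dhttp.proxyPort=8080 -Dhttp.proxyUser=john -Dhttp.proxyPassword=password -Dhttps.proxyHost=myproxy.com -Dhttps.proxyPort=8080 -Dhttps.proxyUser=john -Dhttps.proxyPassword=password
jmeter.bat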

Flask- Cannot read or write to file

I am using AWS EC2 to host a Flask application, and I am trying to read from and write to a text file with the open() function when the user submits a form. When the form is submitted I get the error:
Internal Server Error
The server encountered an internal error and was unable to complete your request. Either the server is overloaded or there is an error in the application.
I am not sure why this error is happening.
The code that does this is:
@app.route("/submit", methods=["POST"])
def submit():
    file = open("settingsfile.txt", "w")
    file.close()
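Two things commonly cause a 500 in this situation: the relative path resolves against the server process's working directory (where the process user may not have write permission), and the view returns nothing, which Flask itself treats as an error. A minimal sketch, assuming the file should live next to the application module (the file contents written here are placeholders):

import os

from flask import Flask

app = Flask(__name__)

# Resolve the file relative to the application module, not the process's working directory.
SETTINGS_PATH = os.path.join(os.path.dirname(os.path.abspath(__file__)), "settingsfile.txt")

@app.route("/submit", methods=["POST"])
def submit():
    with open(SETTINGS_PATH, "w") as f:
        f.write("submitted")
    # A Flask view must return a response; returning None raises an error.
    return "OK"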

ADODB.Connection error from Ruby script on Apache server

I have a Ruby script (non-Rails) that connects to a SQL Server database. When run from the command line, it runs fine. When executed via an HTTP request, it generates an error, specifically when opening the DB connection; something about the combination of the HTTP and SQL pieces is failing.
I'm running the script on a machine with: Windows 7 Ultimate (64-bit), Ruby 1.9.3p125, Apache 2.2.11. The database is SQL Server 10.0.4000, hosted on a separate (corporate, internal) server.
The script looks something like this:
#!/Ruby193/bin/ruby
require 'win32ole'
...
$qadb = nil
begin
  $qadb = SqlServer.new('192.168.100.249', 'qauser', 'password')
  $qadb.open('qadb')
rescue
  logRegression("Rescued: Unable to access QADB: #{$!}")
end
The SqlServer class is based on David Mullet's code, found at http://rubyonwindows.blogspot.com/2007/03/ruby-ado-and-sqlserver.html (not copied here for brevity).
From the command line, the DB opens fine and I get an expected result from the script. When I call the script via my internal server (http://qatools/getTask.rb) I get the following error in my log file:
Rescued: Unable to access QADB: failed to create WIN32OLE object from `ADODB.Connection'
HRESULT error code:0x8007007e
The specified module could not be found.
I've considered that I might be missing a DLL. Other research led me to ntwdblib.dll -- I tried downloading a copy and placing it in various folders. I've also considered that I might be facing an Apache configuration issue and/or a security/permissions issue but I haven't found any solutions for those that seem to fit my specific problem.
Any ideas?

HPCC/HDFS Connector

Does anyone know about the HPCC/HDFS connector? We are using both HPCC and Hadoop. There is a utility (the HPCC/HDFS connector) developed by HPCC Systems which allows an HPCC cluster to access HDFS data.
I have installed the connector, but when I run the program to access data from HDFS it gives an error saying libhdfs.so.0 doesn't exist.
I tried to build libhdfs.so using the command
ant compile-libhdfs -Dlibhdfs=1
which gives the error
target "compile-libhdfs" does not exist in the project "hadoop"
I tried one more command,
ant compile-c++-libhdfs -Dlibhdfs=1
which gives the error
ivy-download:
[get] Getting: http://repo2.maven.org/maven2/org/apache/ivy/ivy/2.1.0/ivy-2.1.0.jar
[get] To: /home/hadoop/hadoop-0.20.203.0/ivy/ivy-2.1.0.jar
[get] Error getting http://repo2.maven.org/maven2/org/apache/ivy/ivy/2.1.0/ivy-2.1.0.jar
to /home/hadoop/hadoop-0.20.203.0/ivy/ivy-2.1.0.jar
BUILD FAILED java.net.ConnectException: Connection timed out
Any suggestion would be a great help.
Chhaya, you might not need to build libhdfs.so; depending on how you installed Hadoop, you might already have it.
Check in HADOOP_LOCATION/c++/Linux-<arch>/lib/libhdfs.so, where HADOOP_LOCATION is your hadoop install location, and arch is the machine’s architecture (i386-32 or amd64-64).
Once you locate the lib, make sure the H2H connector is configured correctly (see page 4 here).
It's just a matter of updating the HADOOP_LOCATION var in the config file:
/opt/HPCCSystems/hdfsconnector.conf
Good luck.
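A quick sketch of those two checks (the architecture folder and the Hadoop path below are illustrative; substitute your own install location):

# Check whether the prebuilt library already ships with the Hadoop install.
ls $HADOOP_LOCATION/c++/Linux-amd64-64/lib/libhdfs.so
# Then point the connector at the same install by updating the variable in
# /opt/HPCCSystems/hdfsconnector.conf, e.g.:
HADOOP_LOCATION=/home/hadoop/hadoop-0.20.203.0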
