So, I have a Hive server (Cloudera, Thrift via HTTP) set up and working, and can connect to it from Tableau using the ODBC driver for Cloudera Hive - all good, from the servers in the AWS farm.
However, no luck from the client site/their end-user PCs.
The reason for this is that they require all outbound traffic to the internet (here, my AWS instance) to go through proxies using NTLM, and I can't get the Cloudera ODBC driver to talk via the NTLM proxy. It appears to ignore the Windows proxy settings entirely, in fact.
I'm aware of two (obvious) solutions: run Fiddler or cntlm locally on the box as an intermediary proxy that handles the NTLM authentication, or set up a reverse proxy in the customer's network and point ODBC at that. Both of these are somewhat unpalatable to the users.
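For reference, the cntlm route would mean running something like this on each PC (a minimal sketch; the domain, hosts, and ports are all placeholders, and cntlm -H can generate password hashes so the plaintext password isn't stored):

# cntlm.ini -- placeholder values throughout
Username    tableau_user
Domain      CORP
Password    secret                      # prefer hashes generated with: cntlm -H
Proxy       proxy.corp.example:8080     # the customer's NTLM proxy
# Tunnel a local port through the NTLM proxy to the Hive server,
# then point the ODBC DSN at localhost:10001:
Tunnel      10001:hive.aws.example:10000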
So: Is there a way to get Cloudera's ODBC driver (or Windows itself) to forcibly go via an NTLM proxy without requiring additional software/servers? Or is there a Cloudera-Hive-compatible Tableau connector that works well with proxies in the middle?
TL;DR: Need to get from Tableau client on Windows to Cloudera Hive in AWS across an NTLM proxy. Thoughts?
The Cloudera Hive ODBC driver currently doesn't support proxies or NTLM authentication. If this feature is important to you, I would suggest raising it as a feature request with Cloudera. I am not aware of any other Hive ODBC driver that supports proxies and NTLM.
Holman
I have installed Hyperion EPM 11.1.2.4 on CentOS 7, i.e. Foundation Services, Essbase, and Financial Reporting. The database I have used is SQL Server.
CentOS is not an officially supported OS for Oracle HTTP Server, hence I went ahead with the WebLogic HTTP server.
Once I start the WebLogic server and all the EPM services, and log in to the WebLogic Server Administration Console, I find my Foundation Services server in ADMIN state.
Also, when I log in to Oracle Fusion Middleware, I can see all my Foundation Services and Financial Reporting servers down.
Hence I am unable to access services like Workspace and Calculation Manager.
A few of my servers are up and in RUNNING state, such as APS, CALC, EAS, and EPMAWEBTIER, but I can only access Essbase.
Please check the attached images:
Oracle Weblogic Administration Console
Oracle Fusion Middleware
How can I access these servers?
Am I having this much trouble just because I used the WebLogic HTTP server instead of Oracle HTTP Server?
If you are installing EPM for learning purposes, I suggest you take the easy way and use a supported OS. Installing this software on an unsupported OS will give you additional problems, and you will never be sure whether they are caused by the unsupported OS or by a mistake in your installation/tuning.
If you download and install a Windows Server VM, it will not expire; it will just show you the activate-license message, but it will be fully working.
For learning purposes / temporary virtual machines, it is the way to go.
Thanks.
I am using the IBM product WebSphere Application Server (WAS), version Base 9.0.5.2.
I want to connect remotely to my IBM WAS to collect a particular set of metrics, and to achieve that I followed the steps mentioned here. I cannot use the MBean approach, as it is not supported by IBM and is only for testing purposes, so all I am left with is option 2 (in the above link).
All the files mentioned in the sample test script attached in the above link are present on my IBM WAS; they aren't present on my remote machine (from which I am trying to connect to the WAS).
I placed those listed files on my remote machine, and still couldn't connect to my IBM WAS.
How shall I test whether I can connect remotely to my IBM WAS or not?
Can somebody please guide me if I'm missing out on any steps?
Verify that your WebSphere JMX port is open on both servers (2809 in the linked example).
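As a quick first test from the remote machine, a plain socket connect tells you whether that port is reachable at all; a minimal sketch (the host name is a placeholder):

import java.net.InetSocketAddress;
import java.net.Socket;

public class PortCheck {
    public static void main(String[] args) throws Exception {
        try (Socket s = new Socket()) {
            // 2809 is the default RMI/IIOP bootstrap port; adjust to your configuration
            s.connect(new InetSocketAddress("washost.example.com", 2809), 5000);
            System.out.println("Port is reachable");
        }
    }
}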
If you want to access stats provided by the PMI infrastructure, then I would consider using the PerfServlet app, which is discussed here - Retrieving performance data with PerfServlet. It gives you access via HTTP, so no heavy client or product libraries are needed, and it returns XML, which you can parse to get the stats you need.
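A minimal sketch of pulling that XML from a remote machine (the context root below is the PerfServlet default, but verify the host, port, and path against your deployment):

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class PerfServletPull {
    public static void main(String[] args) throws Exception {
        // Default PerfServlet context root; confirm it matches your installation
        URL url = new URL("http://washost.example.com:9080/wasPerfTool/servlet/perfservlet");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        try (BufferedReader in = new BufferedReader(new InputStreamReader(conn.getInputStream()))) {
            String line;
            while ((line = in.readLine()) != null) {
                System.out.println(line);  // XML document with PMI statistics
            }
        }
    }
}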
Another option would be to write your own custom app which uses JMX (see Using the JMX interface to develop your own monitoring application) and make it available, for example, as a REST service.
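A rough sketch of such a connection using the WAS AdminClient API (this assumes the WAS thin admin client JARs are on the classpath; the host and SOAP port are placeholders):

import java.util.Properties;
import java.util.Set;
import javax.management.ObjectName;
import com.ibm.websphere.management.AdminClient;
import com.ibm.websphere.management.AdminClientFactory;

public class WasJmxConnect {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.setProperty(AdminClient.CONNECTOR_TYPE, AdminClient.CONNECTOR_TYPE_SOAP);
        props.setProperty(AdminClient.CONNECTOR_HOST, "washost.example.com");
        props.setProperty(AdminClient.CONNECTOR_PORT, "8880");  // default SOAP connector port
        AdminClient client = AdminClientFactory.createAdminClient(props);
        // List the server MBeans as a connectivity check
        Set servers = client.queryNames(new ObjectName("WebSphere:type=Server,*"), null);
        System.out.println("Found " + servers.size() + " server MBean(s)");
    }
}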
Or, if you just want to monitor values, use dedicated monitoring apps like IBM Health Center or third-party tools.
I cannot connect to Google services from a client application if it is trying to communicate with oauth2.googleapis.com (which is probably blocked in my corporate network; I don't know how to test that for sure).
I tried BigQuery with the JDBC driver in DBeaver, with basic settings.
User-based login does this:
It generates a link for OAuth. I open the browser and log in with the right Google account. Then I insert the generated code into DBeaver, and I receive a message that auth has failed.
Service-based login does this:
It does not want me to visit any webpage. It just tells me:
[Simba][BigQueryJDBCDriver](100004) HttpTransport IO error : oauth2.googleapis.com.
I also tried ODBC, where a proxy can be filled in. But no luck.
When I take a look into 'Proxy Options', the proxy port is always overwritten by the proxy host. Weird.
This is what happens when I click on the 'catalog' or 'dataset' drop-down field. I can't take any further steps.
BUT!
When I set my HTTP proxy in the gcloud CLI, communication works, and I can call BigQuery from it.
Does that mean gcloud communicates through the HTTP proxy and DBeaver/ODBC do not? Or does it mean gcloud does not need oauth2.googleapis.com but ODBC and JDBC do, and it is blacklisted? I am confused.
We need to migrate from our internal environment to GCP, and we would love to use various applications. I would ask for whitelisting oauth2.googleapis.com, but I am not sure this is the only problem, as the gcloud app works without any flaws.
I am not experienced with networking, so I am more than happy to update/correct this question or add any info (if you need it) to help me understand this issue. Thank you.
According to your description, your corporate network is using a proxy to reach the Internet; this is the reason why gcloud is able to reach the BigQuery service when proxy settings are configured on your system, either through the Cloud SDK proxy settings or an HTTP proxy environment variable.
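For reference, the Cloud SDK proxy settings mentioned above can be configured like this (host and port are placeholders; authenticated proxies also take proxy/username and proxy/password):

gcloud config set proxy/type http
gcloud config set proxy/address proxy.corp.example
gcloud config set proxy/port 8080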
You need to set up the proxy settings within the JDBC connection string, as described in the Simba JDBC driver documentation, e.g.:
jdbc:bigquery:DataSetId=MyDataSetId;ProjectId=MyProjectId;OAuthType=1;ProxyHost=MyProxyHost;ProxyPort=MyProxyPort;ProxyUID=MyProxyUsername;ProxyPWD=MyProxyPassword
This connection string passes the proxy settings to the Simba JDBC driver.
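As a sketch, the same string can be used from plain Java like this (the driver class name below matches recent Simba BigQuery JDBC releases, but verify it and the placeholder values against your driver version's documentation):

import java.sql.Connection;
import java.sql.DriverManager;

public class BigQueryProxyTest {
    public static void main(String[] args) throws Exception {
        // Driver class name for the Simba BigQuery JDBC 4.2 driver; check your version
        Class.forName("com.simba.googlebigquery.jdbc42.Driver");
        String url = "jdbc:bigquery:DataSetId=MyDataSetId;ProjectId=MyProjectId;OAuthType=1;"
                + "ProxyHost=MyProxyHost;ProxyPort=MyProxyPort;"
                + "ProxyUID=MyProxyUsername;ProxyPWD=MyProxyPassword";
        try (Connection con = DriverManager.getConnection(url)) {
            System.out.println("Connected to BigQuery through the proxy");
        }
    }
}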
I have set up a Hive environment with Kerberos security enabled on a Linux server (Red Hat), and I need to connect from a remote Windows machine to Hive using JDBC.
So, I have HiveServer2 running on the Linux machine, and I have done a kinit.
Now I try to connect from a Java program on the Windows side with a test like this:
Class.forName("org.apache.hive.jdbc.HiveDriver");
String url = "jdbc:hive2://<host>:10000/default;principal=hive/_HOST@<YOUR-REALM.COM>";
Connection con = DriverManager.getConnection(url);
And I got the following error,
Exception due to: Could not open client transport with JDBC Uri:
jdbc:hive2://<host>:10000/;principal=hive/_HOST@<YOUR-REALM.COM>:
GSS initiate failed
What am I doing wrong here? I checked many forums but couldn't find a proper solution. Any answer will be appreciated.
Thanks
If you were running your code on Linux, I would simply point to that post -- i.e. you must use system properties to define the Kerberos and JAAS configuration, from conf files with specific formats.
And you have to switch on the debug trace flags to understand subtle configuration issues (different flavors/versions of JVMs may have different syntax requirements, which are not documented; it's a trial-and-error process).
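Concretely, that usually means setting something like this before opening the connection (the file paths are placeholders):

// Kerberos configuration (realm, KDC) and JAAS login configuration files
System.setProperty("java.security.krb5.conf", "C:/krb/krb5.conf");
System.setProperty("java.security.auth.login.config", "C:/krb/jaas.conf");
// Debug trace flag -- verbose, but invaluable for diagnosing Kerberos issues
System.setProperty("sun.security.krb5.debug", "true");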
But on Windows there are additional problems:
the Apache Hive JDBC driver has some dependencies on Hadoop JARs, especially when Kerberos is involved (see that post for details)
these Hadoop JARs require "native libraries" -- i.e. a Windows port of Hadoop (which you have to compile yourself!! or download from an insecure source on the web!!) -- plus System properties hadoop.home.dir and java.library.path pointing to the Hadoop home dir and its bin sub-dir respectively
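Since java.library.path is read at JVM startup, those two are typically passed on the command line rather than set in code; a sketch with placeholder paths:

java -Dhadoop.home.dir=C:\hadoop -Djava.library.path=C:\hadoop\bin ^
     -Djava.security.krb5.conf=C:\krb\krb5.conf -cp <your-classpath> YourTestProgram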
On top of that, the Apache Hive driver has compatibility issues -- whenever there are changes in the wire protocol, newer clients cannot connect to older servers.
So I strongly advise you to use the Cloudera JDBC driver for Hive for your Windows clients. The Cloudera site just asks for your e-mail.
After that you have an 80+ page PDF manual to read, the JARs to add to your CLASSPATH, and your JDBC URL to adapt according to the manual.
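As a hedged example, a Kerberos URL for the Cloudera driver typically has this shape (property names per the driver manual; check the values against your driver version and cluster):

jdbc:hive2://<host>:10000;AuthMech=1;KrbRealm=YOUR-REALM.COM;KrbHostFQDN=<host-fqdn>;KrbServiceName=hive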
Side note: the Cloudera driver is a proper JDBC-4.x compliant driver, no need for that legacy Class.forName()...
The key for us, when we ran into this problem, was as follows:
On your server, certain Kerberos principals are listed that are allowed to operate on the data.
When we tried to run a query via JDBC, we didn't do the proper kinit on the client side.
In this case the solution is obvious:
On the Windows client: do a kinit with the proper account before connecting.
String url = "jdbc:hive2://<host>:10000/default;principal=hive/_HOST@<YOUR-REALM.COM>";
You should replace <YOUR-REALM.COM> with your real REALM.
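For example, from a Windows command prompt with MIT Kerberos for Windows installed (the user name and realm are placeholders):

kinit your_user@YOUR-REALM.COM
klist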
I have a legacy application, which connects to the configured Oracle database.
It seems to have some logic that alters the database credentials, because it is unable to log in to the Oracle database, while sqlplus started on the same machine is able to log in.
The error I am getting is: [DataDirect][ODBC Oracle Wire Protocol driver][Oracle]ORA-01017: invalid username/password; logon denied
How can I find out what username and password are actually sent to the database?
What I have tried so far:
Enabled auditing of failed sign-on attempts on Oracle (audit create session whenever not successful). It does not solve the issue, because it only logs the username, which seems to be correct, without the password.
Used a sniffer to eavesdrop on the network traffic between the machine running the application and the database, but since Oracle's TNS protocol is encrypted, it did not help much.
Started a server using netcat on port X and provided port X in the application configuration file. The application did connect to my server; that is how I know the application is connecting to the correct server. But since the TNS protocol is pretty complex (it requires a series of messages to be exchanged between the client and the server), I hope there is a simpler way of achieving what I want without having to reverse engineer Oracle and implement my own server.
Enabled tracing of the ODBC driver (Trace=1, TraceFile, TraceDll). The trace file shows the correct username, but obviously the password is not getting logged.
My environment:
Database: Oracle 11g
Application runs on: Solaris
Application uses: DataDirect ODBC Oracle Wire Protocol v70
I'm not sure, but if the connection is established by an ODBC driver (as described in the question tags), then you can try ODBC sniffing tools like ODBC tracing.
Citation:
Password "Sniffing" Using Trace
ODBC provides a means for tracing the conversation taking place between the driver and the host database. Used by developers for testing purposes, the tracing feature is designed to help programmers find out exactly what is going on and to help fix problems. However, tracing (also called "sniffing") can be used by nefarious bad guys to retrieve user passwords.
When tracing is enabled, communications with the host are written to a file. This includes the user ID and password, which are captured in plain text.
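On a Unix-like system such as Solaris, this kind of tracing is typically enabled in the [ODBC] section of odbc.ini, along these lines (the trace library file name varies by DataDirect release, so check your driver's documentation):

[ODBC]
Trace=1
TraceFile=/tmp/odbctrace.out
TraceDll=/opt/datadirect/lib/<trace-library>.so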
Update
SQL*Plus connects to Oracle with the OCI interface, but the DataDirect ODBC driver uses its own proprietary implementation of the communication protocol. So the most probable point of failure is driver misconfiguration or incompatibility.
DataDirect provides some tools for ODBC driver diagnostics, but the only option applicable to the case described in the question is the snoop utility, which acts like the netcat approach already tried.
Because the connection fails at the credential verification stage, the most probable source of error is the use of localized characters in the user name or password. There are some issues with the Oracle authentication process listed in DataDirect Knowledge Search (search for ORA-01017).
It seems that DataDirect provides two separate versions of the driver, with and without Unicode support; therefore one possible point of failure is connecting with the non-Unicode version of the driver to a Unicode version of the database, or vice versa.
P.S. I don't have any hands-on experience with the DataDirect ODBC driver, so these are only suggestions about possible sources of failure.