I'm trying to get system-diagnostics data from my NiFi instance using NiFi itself, but I get an "Anonymous authentication has not been configured." error. Everything used to work, but after the last restart of NiFi this error appeared. The screenshots below show the InvokeHTTP processor configuration.
[Screenshots: InvokeHTTP processor configuration, part 1 and part 2]
By default, anonymous authentication is turned off; it has to be enabled in the configuration files.
I had the same issue when trying to connect Spark Streaming to an output port in NiFi. First, I got this warning in nifi-user.log:
2022-05-21 12:28:49,465 WARN [NiFi Web Server-100] o.a.n.w.s.NiFiAuthenticationFilter Authentication Failed 127.0.0.1 GET https://localhost:8443/nifi-api/site-to-site [Anonymous authentication has not been configured.]
and an error in the Spark Streaming output saying that Spark Streaming cannot connect to NiFi.
After changing nifi.security.allow.anonymous.authentication to true in the nifi.properties file in the conf directory, the warning and the error were gone, and Spark Streaming could connect to the NiFi server.
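For reference, the relevant line in conf/nifi.properties after the change looks like this (everything else in the file is left untouched):

nifi.security.allow.anonymous.authentication=true

Keep in mind that this allows unauthenticated requests to the REST API and site-to-site endpoints, so on a hardened cluster you would normally authenticate the client instead.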
I am currently working in an AEM 6.5 environment, trying to set up the XTM Translation Connector. I successfully configured this on my local environment by doing the following:
Installing the XTM Translation Connector content package
Configuring the credentials (Web Service URI, XTM Client Name, User ID, Password) in /mnt/overlay/cq/translation/cloudservices/editor.html/conf/corp/settings/cloudconfigs/translation/xtm/xtm-translation
When I click the Verify button I am prompted with the message:
Connection parameters correct.
That said, when I follow these same steps in my dev, stage, and prod environments, which are on a different network than my local machine, I am prompted with the message:
Connection parameters incorrect.
I dove into the logs and found:
[com.xtm.translation.connector.xtm-for-aem.core:1.5.2.SNAPSHOT]
...
Caused by: java.net.SocketTimeoutException: SocketTimeoutException invoking Web_Service_URI: connect timed out
I happen to know that this network uses a proxy server for external connections to the internet, so I tried configuring the Apache HTTP Components Proxy Configuration in /system/console/configMgr and then testing the XTM Translation Connector connection again. However, based on the error.log messages, it doesn't even seem like the connector is trying to use the proxy when it connects.
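For reference, the proxy settings I entered look roughly like the following; the host and port are placeholders, and the PID and property names here are illustrative and should be checked against the Apache HTTP Components Proxy Configuration entry in configMgr:

org.apache.http.proxyconfigurator
  proxy.enabled=true
  proxy.host=proxy.example.com
  proxy.port=8080
  proxy.exceptions=localhost,127.0.0.1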
How can I get the XTM Translation Connector to use this proxy?
Any thoughts on this are welcome.
Thanks.
I have a Hadoop cluster running Hortonworks Data Platform 2.4.2 which has been running well for more than a year. The cluster is Kerberised and external applications connect via Knox. Earlier today, the cluster stopped accepting JDBC connections via Knox to Hive.
The Knox logs show no errors, but the Hive Server2 log shows the following error:
"Caused by: org.apache.hadoop.security.authorize.AuthorizationException: User: knox is not allowed to impersonate org.apache.hive.service.cli.HiveSQLException: Failed to validate proxy privilege of knox for "
Having looked at other users' questions, the suggestions mostly seem to be about setting the hadoop.proxyusers.users and hadoop.proxyusers.groups configuration options correctly.
However, in my case I don't see how these settings could be the problem. The cluster has been running for over a year, and a number of applications connect to Hive via JDBC on a daily basis. The server configuration has not been changed, and connections were previously succeeding on the current configuration. No changes were made to the platform or environment, and the cluster was not restarted or taken down for maintenance between the last successful JDBC connection and JDBC connections being declined.
I have now stopped and started the cluster, but after restart the cluster still does not accept JDBC connections.
Does anyone have any suggestions on how I should proceed?
Do you have Hive Impersonation turned on?
hive.server2.enable.doAs=true
This could be the issue assuming hadoop.proxyusers.users and hadoop.proxyusers.groups are set properly.
Also, check whether the user 'knox' exists on the HiveServer2 node (and on the other nodes used for impersonation).
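A quick way to verify, assuming shell access to the HiveServer2 host:

id knox

If the account does not exist there, group lookups for 'knox' (and therefore the proxy-privilege check) can fail.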
The known workaround seems to be to set:
hadoop.proxyuser.knox.groups = *
hadoop.proxyuser.knox.hosts = *
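In core-site.xml (or Ambari's HDFS custom core-site) that corresponds to:

<property>
  <name>hadoop.proxyuser.knox.groups</name>
  <value>*</value>
</property>
<property>
  <name>hadoop.proxyuser.knox.hosts</name>
  <value>*</value>
</property>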
I have yet to find a real fix that lets you keep this layer of added security.
I am invoking an API command (nifi-api/access/token) to get an access token, but I am getting an error like this: javax.net.ssl.SSLHandshakeException: unable to find valid certification path to requested target. We have LDAP configured in the NiFi cluster and I am able to log in to the NiFi UI using my credentials. I have just started exploring the NiFi REST API. Any help would be appreciated!
(P.S. I want to use the REST API both from code and from native processors; I can do this on the simple NiFi instance I have on my desktop. How can I accomplish my task on NiFi with Kerberos authentication?)
Thank you in advance.
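For context, the token request I am making looks roughly like this; the hostname and credentials are placeholders:

curl -X POST -d 'username=myuser&password=mypassword' https://nifi-host:8443/nifi-api/access/token

The handshake fails before the credentials are even sent, because the client does not trust the server's certificate.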
Import the NiFi server's certificate into truststore.jks using keytool, then in the InvokeHTTP processor use an SSL Context Service that points to your truststore.jks.
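A minimal sketch of the import, assuming the server certificate has been exported to nifi-cert.pem and changeit is the truststore password you choose:

keytool -importcert -alias nifi-cert -file nifi-cert.pem -keystore truststore.jks -storepass changeit -noprompt

The same certificate also works outside NiFi, e.g. curl can trust the PEM file directly via --cacert nifi-cert.pem when calling /nifi-api/access/token.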
I am using Hadoop 2.7.2 and have configured HTTPS for the YARN and Job History Server web UIs, but the UIs are still served over HTTP and not HTTPS.
I have set up the keystores and truststores and configured ssl-server.xml and ssl-client.xml. In addition, I have set the following properties in mapred-site.xml using Ambari:
mapreduce.jobhistory.http.policy=HTTPS_ONLY
mapreduce.jobhistory.webapp.https.address=JHS:19889
mapreduce.jobhistory.webapp.address=JHS:19889
When I access the HTTPS URL https://JHS:19889, I receive the following error:
SSL received a record that exceeded the maximum permissible length. Error code: SSL_ERROR_RX_RECORD_TOO_LONG
The above error indicates that the Job History Server is listening for HTTP connections, not HTTPS.
When I access the same URL over HTTP, i.e. http://JHS:19889, I can see the Job History Server web UI. The same thing happens for YARN's ResourceManager web UI after making the following configuration:
yarn.http.policy=HTTPS_ONLY
yarn.log.server.url=https://JHS:19889/jobhistory/logs
yarn.resourcemanager.webapp.https.address=RM:8090
yarn.nodemanager.webapp.https.address=0.0.0.0:8090
How can I make the YARN and Job History Server web UIs available over HTTPS?
MapReduce and YARN are part of the Hadoop project, so to enable SSL you need to turn it on for Hadoop itself in core-site.xml.
hadoop.ssl.enabled=true
Then there are some more settings (search for hadoop.ssl) that you might need, but that's the main one.
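A minimal core-site.xml sketch, assuming the default ssl-server.xml and ssl-client.xml file names; the hadoop.ssl.* property names beyond the first one should be verified against your Hadoop 2.7 documentation:

<property>
  <name>hadoop.ssl.enabled</name>
  <value>true</value>
</property>
<property>
  <name>hadoop.ssl.server.conf</name>
  <value>ssl-server.xml</value>
</property>
<property>
  <name>hadoop.ssl.client.conf</name>
  <value>ssl-client.xml</value>
</property>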
The authorization failure happens when accessing the YARN ResourceManager web UI in the Chrome browser with Kerberos SPNEGO (yarn.resourcemanager.webapp.address:8088/cluster).
The failure is shown as:
"HTTP ERROR 403 Problem accessing /cluster. Reason: GSSException: Failure unspecified at GSS-API level (Mechanism level: Request is a replay (34))"
P.S. The other web UIs (NameNode, JobHistory, etc.) can be accessed successfully in Chrome with Kerberos SPNEGO; only the YARN ResourceManager UI fails.
Hadoop version is 2.5.2.
Could someone help me check this problem?
The problem can be resolved by setting:
"yarn.resourcemanager.webapp.delegation-token-auth-filter.enabled=false"
in yarn-site.xml of Hadoop 2.5.2.
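As a yarn-site.xml entry, that is:

<property>
  <name>yarn.resourcemanager.webapp.delegation-token-auth-filter.enabled</name>
  <value>false</value>
</property>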
Setting it to false removes the YARNAuthenticationFilter from the webapp's default request filter chain, changing
chain=NoCacheFilter->NoCacheFilter->safety->YARNAuthenticationFilter->authentication->guice->default
to
chain=NoCacheFilter->NoCacheFilter->safety->authentication->guice->default
Usually you get this on a Kerberized cluster when you have not instructed your browser to use Kerberos authentication for certain domains; this is needed for some Hadoop web apps.
For Chrome on OS X, for example, just type the following into your terminal:
defaults write com.google.Chrome AuthServerWhitelist "*.domain.realm"
defaults write com.google.Chrome AuthNegotiateDelegateWhitelist "*.domain.realm"
where domain.realm is the [domain_realm] entry from your Kerberos configuration file /etc/krb5.conf.
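For illustration, that mapping in /etc/krb5.conf typically looks like this, with the names replaced by your own domain and realm:

[domain_realm]
  .domain.realm = DOMAIN.REALM
  domain.realm = DOMAIN.REALM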