Mapreduce job history server and yarn web UIs do not respect HTTPS_ONLY policy - hadoop

I am using Hadoop 2.7.2 and have configured HTTPS for the YARN and job history server web UIs, but the UIs are still served over HTTP, not HTTPS.
I have set up key and trust stores and configured ssl-server.xml and ssl-client.xml. In addition, I have set the following properties in mapred-site.xml using Ambari:
mapreduce.jobhistory.http.policy=HTTPS_ONLY
mapreduce.jobhistory.webapp.https.address=JHS:19889
mapreduce.jobhistory.webapp.address=JHS:19889
When I access the https url https://JHS:19889, I receive the following error:
SSL received a record that exceeded the maximum permissible length. Error code: SSL_ERROR_RX_RECORD_TOO_LONG
The above error occurs because the job history server is listening for HTTP connections, not HTTPS.
When I access the same URL with HTTP, i.e. http://JHS:19889, I can see the job history server web UI. The same thing happens for YARN's resource manager web UI after making the following configuration:
yarn.http.policy=HTTPS_ONLY
yarn.log.server.url=https://JHS:19889/jobhistory/logs
yarn.resourcemanager.webapp.https.address=RM:8090
yarn.nodemanager.webapp.https.address=0.0.0.0:8090
How can I make the YARN and job history server web UIs available over HTTPS?

MapReduce and YARN are part of the Hadoop project, so to enable SSL you need to turn it on in Hadoop itself, in core-site.xml:
hadoop.ssl.enabled=true
Then there are some more settings (search for hadoop.ssl) that you might need, but that's the main one.
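On a non-Ambari install these end up in core-site.xml directly; a minimal sketch (the second property is one of the related hadoop.ssl settings and points at the file holding the keystore details):

```xml
<!-- core-site.xml: enable SSL for the Hadoop daemon web UIs -->
<property>
  <name>hadoop.ssl.enabled</name>
  <value>true</value>
</property>
<!-- resource name of the server-side SSL config (keystore location/passwords) -->
<property>
  <name>hadoop.ssl.server.conf</name>
  <value>ssl-server.xml</value>
</property>
```

After changing these, the daemons have to be restarted for the web servers to rebind on HTTPS.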

Related

How to set up tez ui with remote timeline server url in config.js?

Currently I am trying to configure tez-ui. I have deployed the tez-ui web app on Tomcat, running it as a service. When I edit the timeline URL to ip:8188 instead of localhost in the /opt/tomcat/webapps/tez-ui/config/config.js file, it throws the error: Adapter operation failed » Timeline server (ATS) is out of reach. Either it's down, or CORS is not enabled.
I have noticed one thing: tez-ui is still requesting the URL http://localhost:8188/ws/v1/timeline/TEZ_DAG_ID, whereas after editing the timeline URL in config.js it should point to http://ip:8188/ws/v1/timeline/TEZ_DAG_ID. Can anyone explain why this is happening?

Nifi - Anonymous authentication has not been configured

I'm trying to get system-diagnostics data of my NiFi instance from itself, but I get an "Anonymous authentication has not been configured." error. Everything used to work; the error appeared after the last restart of NiFi. The screenshots below show the InvokeHTTP processor configuration:
(screenshot: part 1)
(screenshot: part 2)
By default, anonymous authentication is turned off. To enable it, you must do so in the configuration files.
I had the same issue when trying to connect Spark Streaming to an output port in NiFi. First, I had this warning in nifi-user.log:
2022-05-21 12:28:49,465 WARN [NiFi Web Server-100] o.a.n.w.s.NiFiAuthenticationFilter Authentication Failed 127.0.0.1 GET https://localhost:8443/nifi-api/site-to-site [Anonymous authentication has not been configured.]
and an error in the Spark Streaming output saying that Spark Streaming cannot connect to NiFi.
After setting nifi.security.allow.anonymous.authentication to true in the nifi.properties file in the conf directory, the warning and the error are gone and Spark Streaming can connect to the NiFi server.
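The relevant line in conf/nifi.properties (NiFi has to be restarted for the change to take effect):

```properties
# conf/nifi.properties
# Allow unauthenticated requests to the REST API. This is a security
# trade-off; prefer client certificates or tokens in production.
nifi.security.allow.anonymous.authentication=true
```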

Get remote errors in Service Fabric using Web Api

Web API has GlobalConfiguration.Configuration.IncludeErrorDetailPolicy = IncludeErrorDetailPolicy.Always; to turn on remote errors (allowing you to see them in a browser even when you are not browsing on the local machine).
But, as near as I can tell, Service Fabric running Web API does not support GlobalConfiguration.
Is there a way to configure things so I don't have to log into one of my Service Fabric server machines each time I want to see what a service's error message is?
I recommend you don't show error details to everyone.
It's a security risk.
Consider moving your error logs out of your cluster. For instance, by using OMS, ELK or Application Insights.

What should be in the Path field of HTTP Authorization Manager for health monitoring of a Windows server via JMeter

As "/manager/status" given for Tomcat server in HTTP Authorization manager. How can we get for windows server monitoring
If "Windows" stands for "IIS", I'm afraid you won't be able to use Monitor Results listener.
As per Building a Monitor Test Plan article:
The monitor was designed to work with the status servlet in Tomcat 5. In theory, any servlet container that supports JMX (Java Management Extension) can port the status servlet to provide the same information.
So for IIS you might want to consider using its own performance counters instead. Check out How to monitor Web server performance by using counter logs in System Monitor in IIS article to get an overall idea on setting this up.
If you want a platform-independent and JMeter-integrated solution, you can also consider the Servers Performance Monitoring (aka PerfMon) plugin, which is cross-platform and returns much more information than you can get via JMX MBeans. Plugin installation and usage are described in detail in the How to Monitor Your Server Health & Performance During a JMeter Load Test guide.

Failed to be authorized from yarn resource manager webapp under kerberos

The authorization failure happens when accessing the YARN resource manager web UI with the Chrome browser using Kerberos SPNEGO (yarn.resourcemanager.webapp.address:8088/cluster).
The failure shows as:
"HTTP ERROR 403 Problem accessing /cluster. Reason: GSSException: Failure unspecified at GSS-API level (Mechanism level: Request is a replay (34))"
PS: The other web UIs (NameNode, JobHistory, etc.) can be accessed successfully via Chrome with Kerberos SPNEGO; only the YARN resource manager fails.
Hadoop is 2.5.2.
Can someone help me check this problem?
The problem can be resolved by setting:
"yarn.resourcemanager.webapp.delegation-token-auth-filter.enabled=false"
in yarn-site.xml of Hadoop-2.5.2
With the value false, YARNAuthenticationFilter is dropped from the webapp's default request filter chain, turning chain=NoCacheFilter->NoCacheFilter->safety->YARNAuthenticationFilter->authentication->guice->default
into chain=NoCacheFilter->NoCacheFilter->safety->authentication->guice->default
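In yarn-site.xml form, the setting from the answer above looks like this:

```xml
<!-- yarn-site.xml: bypass the RM delegation-token auth filter so that
     plain SPNEGO authentication handles web UI requests -->
<property>
  <name>yarn.resourcemanager.webapp.delegation-token-auth-filter.enabled</name>
  <value>false</value>
</property>
```

The resource manager has to be restarted for the filter chain to be rebuilt.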
Usually you get this with a Kerberized cluster installation when you have not instructed your browser to use Kerberos authentication when connecting to certain domains, which some Hadoop web apps require.
For example, for Chrome on OS X, just type into your terminal:
defaults write com.google.Chrome AuthServerWhitelist "*.domain.realm"
defaults write com.google.Chrome AuthNegotiateDelegateWhitelist  "*.domain.realm"
where domain.realm is the [domain_realm] entry from your Kerberos configuration file /etc/krb5.conf.