apache nifi 1.7.1 restart issue on windows machine - apache-nifi

I am using Apache NiFi on an Apache Hadoop cluster, and when I try to start it with the run-start.sh file I get the error below.
environment: WINDOWS
2018-08-23 20:00:31,856 WARN [main] org.apache.nifi.bootstrap.Command Failed to set permissions so that only the owner can read pid file D:\NIFI\nifi-1.7.1-bin\nifi-1.7.1\bin\..\run\nifi.pid; this may allows others to have access to the key needed to communicate with NiFi. Permissions should be changed so that only the owner can read this file
2018-08-23 20:00:31,866 WARN [main] org.apache.nifi.bootstrap.Command Failed to set permissions so that only the owner can read status file D:\NIFI\nifi-1.7.1-bin\nifi-1.7.1\bin\..\run\nifi.status; this may allows others to have access to the key needed to communicate with NiFi. Permissions should be changed so that only the owner can read this file
2018-08-23 20:00:31,879 INFO [main] org.apache.nifi.bootstrap.Command Launched Apache NiFi with Process ID 9888
And I am unable to connect from the front end (UI). This happens when I shut down and try to reconnect again; I generally stop the server using Ctrl+C on Windows.
Kindly help.
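Not something from this thread, just a workaround that is commonly tried: after NiFi is killed with Ctrl+C, the bootstrap can leave stale nifi.pid/nifi.status files behind in the run directory shown in the warnings above, and clearing them before restarting sometimes resolves a bootstrap that will no longer start or respond. A rough sketch from a command prompt in the NiFi bin directory (make sure no NiFi java process is still running first; run-nifi.bat is the stock Windows start script in 1.7.x):

rem Remove the stale bootstrap state files left over from the previous run
del ..\run\nifi.pid
del ..\run\nifi.status
rem Start NiFi again
run-nifi.bat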

Related

How to set Hadoop class parameters in Hive, similar to Pig shown here?

I want Hive to automatically acquire a Kerberos ticket whenever hive (more specifically hive-shell, not hive-server) is executed, and also to renew it automatically if a job runs longer than the ticket's lifetime.
I found similar functionality in Pig (see this). I tested it and it works: Pig acquires the ticket automatically from the keytab, so I don't have to acquire it manually with kinit before starting the job. It also renews the ticket whenever needed, as mentioned in the docs.
While researching, I came across user-name-handling-in-hadoop and noticed a similar log statement when starting Hive, where the UserGroupInformation class dumps its configuration parameters.
Since I want this to happen every time Hive is executed, I tried putting the properties in HADOOP_OPTS, which looks like this:
export HADOOP_OPTS="$HADOOP_OPTS -Djava.security.krb5.conf=/etc/krb5.conf -Dhadoop.security.krb5.principal=root#MSI.COM -Dhadoop.security.krb5.keytab=/etc/security/keytab/user.service.keytab"
But whenever I execute it, it dumps the following parameters, which means the principal and keytab are not being picked up; the property names may be wrong, since I used the names I found for Pig. The krb5.conf property is being taken into account, though: changing the name of the conf file produces a "default realm can't be found" error because the correct conf file can no longer be read.
23/01/23 23:33:28 DEBUG security.UserGroupInformation: hadoop login commit
23/01/23 23:33:28 DEBUG security.UserGroupInformation: using kerberos user:null
23/01/23 23:33:28 DEBUG security.UserGroupInformation: using local user:UnixPrincipal: root
23/01/23 23:33:28 DEBUG security.UserGroupInformation: Using user: "UnixPrincipal: root" with name root
23/01/23 23:33:28 DEBUG security.UserGroupInformation: User entry: "root"
23/01/23 23:33:28 DEBUG security.UserGroupInformation: Assuming keytab is managed externally since logged in from subject.
23/01/23 23:33:28 DEBUG security.UserGroupInformation: UGI loginUser:root (auth:KERBEROS)
Thanks in advance for any guidance.
Ultimately, I want every invocation of hive-shell or hive-cli to automatically request a Kerberos ticket and renew it when needed.
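Not an answer from the thread, but one workaround that is sometimes used while the property-based approach is unresolved: wrap the hive command in a small script that obtains the ticket from the keytab with kinit before the CLI starts. The keytab path and principal below are taken from the question (the principal is written there as root#MSI.COM); everything else is an assumed sketch.

#!/bin/bash
# hive-with-ticket.sh - hypothetical wrapper: acquire a Kerberos ticket from the
# keytab non-interactively, then hand control to the real Hive CLI.
KEYTAB=/etc/security/keytab/user.service.keytab
PRINCIPAL='root@MSI.COM'

# kinit -kt reads the key from the keytab, so no password prompt is needed.
kinit -kt "$KEYTAB" "$PRINCIPAL" || { echo "kinit failed" >&2; exit 1; }

# Pass any arguments (-e, -f, ...) straight through to hive.
exec hive "$@"

Note that this only guarantees a fresh ticket at startup; it does not renew the ticket mid-job the way Pig's built-in handling does.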

Ambari won't restart: DB check failed

When I restarted my cluster, Ambari didn't start because of a failed database consistency check:
sudo service ambari-server restart --skip-database-check
Using python /usr/bin/python
Restarting ambari-server
Waiting for server stop...
Ambari Server stopped
Ambari Server running with administrator privileges.
Organizing resource files at /var/lib/ambari-server/resources...
Ambari Server is starting with the database consistency check skipped. Do not make any changes to your cluster topology or perform a cluster upgrade until you correct the database consistency issues. See "/var/log/ambari-server/ambari-server-check-database.log" for more details on the consistency issues.
Server PID at: /var/run/ambari-server/ambari-server.pid
Server out at: /var/log/ambari-server/ambari-server.out
Server log at: /var/log/ambari-server/ambari-server.log
Waiting for server start.....................
DB configs consistency check failed. Run "ambari-server start --skip-database-check" to skip. You may try --auto-fix-database flag to attempt to fix issues automatically. If you use this "--skip-database-check" option, do not make any changes to your cluster topology or perform a cluster upgrade until you correct the database consistency issues. See /var/log/ambari-server/ambari-server-check-database.log for more details on the consistency issues.
ERROR: Exiting with exit code -1.
REASON: Ambari Server java process has stopped. Please check the logs for more information.
I looked in the logs in "/var/log/ambari-server/ambari-server-check-database.log", and I saw:
2017-08-23 08:16:13,445 INFO - Checking Topology tables
2017-08-23 08:16:13,447 ERROR - Your topology request hierarchy is not complete for each row in topology_request should exist at least one raw in topology_logical_request, topology_host_request, topology_host_task, topology_logical_task.
I tried both the --auto-fix-database and --skip-database-check options; neither worked.
It turned out that PostgreSQL had not started correctly. The Ambari log never mentioned PostgreSQL being down or unavailable, but it was odd that Ambari could not access the topology configuration stored in it.
sudo service postgresql restart
Stopping postgresql service: [ OK ]
Starting postgresql service: [ OK ]
It did the trick:
sudo service ambari-server restart
Using python /usr/bin/python
Restarting ambari-server
Ambari Server is not running
Ambari Server running with administrator privileges.
Organizing resource files at /var/lib/ambari-server/resources...
Ambari database consistency check started...
Server PID at: /var/run/ambari-server/ambari-server.pid
Server out at: /var/log/ambari-server/ambari-server.out
Server log at: /var/log/ambari-server/ambari-server.log
Waiting for server start.........
Server started listening on 8080

How to specify the address of ResourceManager to bin/yarn-session.sh?

I am a newbie with Flink.
I'm confused about how to specify the address of the ResourceManager when running bin/yarn-session.sh.
When you start a Flink YARN session via bin/yarn-session.sh, it creates a .yarn-properties-USER file in your tmp directory. This file contains the connection information for the Flink cluster. When you then submit a job via bin/flink run <JOB_JAR>, the client picks up the connection information from this file.
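As a rough sketch of that workflow (not part of the answer above; the flag names are from older Flink releases and the paths are placeholders): the ResourceManager address is not passed to yarn-session.sh at all, it is read from the Hadoop/YARN configuration pointed to by HADOOP_CONF_DIR (or YARN_CONF_DIR).

# Point Flink at the YARN configuration that contains the ResourceManager address
export HADOOP_CONF_DIR=/etc/hadoop/conf

# Start the YARN session; this writes /tmp/.yarn-properties-<user>
./bin/yarn-session.sh -n 2 -jm 1024 -tm 2048

# Submit a job; the client reads the properties file to find the session
./bin/flink run ./examples/batch/WordCount.jar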

Hadoop NFS unable to start the Hadoop NFS gateway

I am trying to install the NFS gateway on a Hadoop cluster.
Unfortunately I am not able to start the NFS gateway; it fails with the following error.
I have also tried to get more debugging information by setting the level to DEBUG in the log4j file, but that does not seem to affect the output, so I also need to know how to increase the logging level.
************************************************************/
14/05/22 10:59:43 INFO nfs3.Nfs3Base: registered UNIX signal handlers for [TERM, HUP, INT]
Exception in thread "main" java.lang.IllegalArgumentException: value already present: sshd
at com.google.common.base.Preconditions.checkArgument(Preconditions.java:115)
at com.google.common.collect.AbstractBiMap.putInBothMaps(AbstractBiMap.java:112)
at com.google.common.collect.AbstractBiMap.put(AbstractBiMap.java:96)
at com.google.common.collect.HashBiMap.put(HashBiMap.java:85)
at org.apache.hadoop.nfs.nfs3.IdUserGroup.updateMapInternal(IdUserGroup.java:85)
at org.apache.hadoop.nfs.nfs3.IdUserGroup.updateMaps(IdUserGroup.java:110)
at org.apache.hadoop.nfs.nfs3.IdUserGroup.<init>(IdUserGroup.java:54)
at org.apache.hadoop.hdfs.nfs.nfs3.RpcProgramNfs3.<init>(RpcProgramNfs3.java:172)
at org.apache.hadoop.hdfs.nfs.nfs3.RpcProgramNfs3.<init>(RpcProgramNfs3.java:164)
at org.apache.hadoop.hdfs.nfs.nfs3.Nfs3.<init>(Nfs3.java:41)
at org.apache.hadoop.hdfs.nfs.nfs3.Nfs3.main(Nfs3.java:52)
14/05/22 10:59:45 INFO nfs3.Nfs3Base: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down Nfs3 at
************************************************************/
I suspect it is related to the following issue, https://issues.apache.org/jira/browse/HDFS-5587, but I do not understand from it what action I need to take.
This is documented in the following Ticket, with workaround below:
https://issues.apache.org/jira/browse/HDFS-5587
The issue in my case was that sshd and some other users existed both in LDAP and on the local box, but the UIDs did not match.
NFS gateway can't start with duplicate name or id on the host system.
This is because HDFS (non-kerberos cluster) uses name as the only way
to identify a user or group. The host system with duplicated
user/group name or id might work fine most of the time by itself.
However when NFS gateway talks to HDFS, HDFS accepts only user and
group name. Therefore, same name means the same user or same group. To
find the duplicated names/ids, one can use the commands given in the ticket (one set for Linux systems, one for MacOS).
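Those exact commands are not reproduced here, but as a rough, assumed sketch of the same idea: enumerate all accounts the host can see (local files plus LDAP, if NSS enumeration is enabled) and flag any user name that resolves to more than one UID, which is exactly the sshd situation in the stack trace above.

# List user names that appear more than once, together with their UIDs
getent passwd | awk -F: '{count[$1]++; ids[$1] = ids[$1] " " $3}
  END {for (name in count) if (count[name] > 1) print name, "has UIDs:" ids[name]}'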

Error occurred when using HDFS to store the data of HBase

When I set the hbase.rootdir configuration in hbase-site.xml to a local filesystem path like file://hbase_root_dir_path, HBase worked OK. But when I changed it to hdfs://localhost:9000/hbase, HBase was also OK at the beginning; after a short time (usually a few seconds), however, it stopped working. I found with the jps command that the HMaster had stopped, and of course I could not open the localhost:60010 web page. I read the log and found something wrong like the following:
INFO org.apache.zookeeper.server.PrepRequestProcessor: Got user-level KeeperException when processing sessionid:0x13e35b26eb80001 type:delete cxid:0x13 zxid:0xc txntype:-1 reqpath:n/a Error Path:/hbase/backup-masters/localhost,35320,1366700487007 Error:KeeperErrorCode = NoNode for /hbase/backup-masters/localhost,35320,1366700487007
INFO org.apache.zookeeper.ClientCnxn: Opening socket connection to server localhost/127.0.0.1:2182. Will not attempt to authenticate using SASL (unknown error)
ERROR org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler: Failed open of region=person,,1365998702159.a5af90c23325829096517fb3b15bca17., starting to roll back the global memstore size.
java.lang.IllegalStateException: Could not instantiate a region instance.
WARN org.apache.zookeeper.ClientCnxn: Session 0x13e35b26eb80002 for server null, unexpected error, closing socket connection and attempting reconnect
I use HBase in pseudo-distributed mode on Ubuntu 12.04 LTS.
In my /etc/hosts, I have already changed the IP of the hostname to 127.0.0.1, and my Hadoop safemode status is OFF. My Hadoop version is 1.0.4 and my HBase version is 0.94.6.1 (both are the latest stable releases); the HBase Reference Guide says hbase-0.94.x works fine with hadoop-1.0.x.
I think something about HDFS is causing the problem, because it really does work with the local filesystem. By the way, there is an hbase-x.x.x-security release; what's the difference between it and the hbase-x.x.x release, and do I need to use the security release?
Did you set your ZooKeeper quorum? It seems ZooKeeper is trying to connect to your localhost.
Try setting the addresses of the machines you want to use with the hbase.zookeeper.quorum property in hbase-site.xml. Also, if you're not managing your own ZooKeeper instance, make sure that in hbase-env.sh the line export HBASE_MANAGES_ZK=true isn't commented out.
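For illustration only (the host names below are placeholders, not something from the question), the two settings the answer refers to look roughly like this.

In hbase-site.xml:
<property>
  <name>hbase.zookeeper.quorum</name>
  <value>zkhost1,zkhost2,zkhost3</value>
</property>

In hbase-env.sh, when HBase should manage ZooKeeper itself:
export HBASE_MANAGES_ZK=true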
