Hive error when running from Hortonworks Sandbox - hadoop

I am following this document to test the sentiment analysis. Can someone please help me out? Thanks!
[root@sandbox ~]# hive -f hiveddl.sql
15/04/12 15:43:23 WARN conf.HiveConf: HiveConf of name hive.optimize.mapjoin.mapreduce does not exist
15/04/12 15:43:23 WARN conf.HiveConf: HiveConf of name hive.heapsize does not exist
15/04/12 15:43:23 WARN conf.HiveConf: HiveConf of name hive.server2.enable.impersonation does not exist
15/04/12 15:43:23 WARN conf.HiveConf: HiveConf of name hive.auto.convert.sortmerge.join.noconditionaltask does not exist
Logging initialized using configuration in file:/etc/hive/conf/hive-log4j.properties
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/hdp/2.2.0.0-2041/hadoop/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/hdp/2.2.0.0-2041/hive/lib/hive-jdbc-0.14.0.2.2.0.0-2041-standalone.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
Added [json-serde-1.1.6-SNAPSHOT-jar-with-dependencies.jar] to class path
Added resources: [json-serde-1.1.6-SNAPSHOT-jar-with-dependencies.jar]
FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask. org.apache.hadoop.hive.serde2.objectinspector.primitive.AbstractPrimitiveJavaObjectInspector.<init>(Lorg/apache/hadoop/hive/serde2/objectinspector/primitive/PrimitiveObjectInspectorUtils$PrimitiveTypeEntry;)V

This issue has already been reported and answered on GitHub:
Github issue link
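The fix described there is essentially the same one given under "Hive won't run in Hortonworks 2.2.4" below: stop loading the incompatible bundled JSON SerDe and use the SerDe that ships with Hive's HCatalog instead. A minimal sketch (it assumes the stock hiveddl.sql, whose first line is the ADD JAR for json-serde-1.1.6-SNAPSHOT; verify against your copy):
# 1) delete the first line of hiveddl.sql (the ADD JAR statement for the snapshot SerDe)
# 2) replace the table's SerDe with the Hive-bundled one:
#      ROW FORMAT SERDE 'org.apache.hive.hcatalog.data.JsonSerDe'
# 3) re-run the script:
hive -f hiveddl.sql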

Related

Install Hive on Windows

I am trying to install Hive 3.1.2 on Windows 10 with Hadoop 3.2.2.
I can start the Hadoop server and start the Hive shell by running "hive".
The first problem is that it shows a lot of WARN messages:
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
2021-09-09T21:01:22,001 INFO [main] org.apache.hadoop.hive.conf.HiveConf - Found configuration file file:/C:/my_programs/hive_3.1.2/conf/hive-site.xml
2021-09-09T21:01:22,303 WARN [main] org.apache.hadoop.hive.conf.HiveConf - HiveConf of name hive.server2.enable.impersonation does not exist
2021-09-09T21:01:23,557 WARN [main] org.apache.hadoop.hive.conf.HiveConf - HiveConf of name hive.server2.enable.impersonation does not exist
Hive Session ID = f879881f-c49b-449b-b8cf-81302c585358
Logging initialized using configuration in jar:file:/C:/my_programs/hive_3.1.2/lib/hive-common-3.1.2.jar!/hive-log4j2.properties Async: true
2021-09-09T21:01:25,309 INFO [main] org.apache.hadoop.hive.ql.session.SessionState - Created HDFS directory: /tmp/hive/admin/f879881f-c49b-449b-b8cf-81302c585358
2021-09-09T21:01:25,317 INFO [main] org.apache.hadoop.hive.ql.session.SessionState - Created local directory: C:/Users/admin/AppData/Local/Temp/admin/f879881f-c49b-449b-b8cf-81302c585358
2021-09-09T21:01:25,325 INFO [main] org.apache.hadoop.hive.ql.session.SessionState - Created HDFS directory: /tmp/hive/admin/f879881f-c49b-449b-b8cf-81302c585358/_tmp_space.db
2021-09-09T21:01:25,345 INFO [main] org.apache.hadoop.hive.conf.HiveConf - Using the default value passed in for log id: f879881f-c49b-449b-b8cf-81302c585358
2021-09-09T21:01:25,345 INFO [main] org.apache.hadoop.hive.ql.session.SessionState - Updating thread name to f879881f-c49b-449b-b8cf-81302c585358 main
2021-09-09T21:01:25,383 WARN [f879881f-c49b-449b-b8cf-81302c585358 main] org.apache.hadoop.hive.conf.HiveConf - HiveConf of name hive.server2.enable.impersonation does not exist
Hive-on-MR is deprecated in Hive 2 and may not be available in the future versions. Consider using a different execution engine (i.e. spark, tez) or using Hive 1.X releases.
2021-09-09T21:01:53,237 INFO [f879881f-c49b-449b-b8cf-81302c585358 main] CliDriver - Hive-on-MR is deprecated in Hive 2 and may not be available in the future versions. Consider using a different execution engine (i.e. spark, tez) or using Hive 1.X releases.
hive>
The Hive shell still starts, but when I run
hive> show databases;
it fails with this error:
hive> show databases;
2021-09-09T21:05:01,341 INFO [f879881f-c49b-449b-b8cf-81302c585358 main] org.apache.hadoop.hive.conf.HiveConf - Using the default value passed in for log id: f879881f-c49b-449b-b8cf-81302c585358
FAILED: HiveException java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient
2021-09-09T21:05:43,092 INFO [f879881f-c49b-449b-b8cf-81302c585358 main] org.apache.hadoop.hive.conf.HiveConf - Using the default value passed in for log id: f879881f-c49b-449b-b8cf-81302c585358
2021-09-09T21:05:43,093 INFO [f879881f-c49b-449b-b8cf-81302c585358 main] org.apache.hadoop.hive.ql.session.SessionState - Resetting thread name to main
hive>
I have read some solutions, and I think the problem comes from the Hive metastore.
I followed a tutorial that connects a Derby metastore to Hive.
But when I try to run
schematool -dbType derby -initSchema
Windows cannot run schematool as a command.
So I am really confused: how can I initialize the metastore database for Hive, or is there another way to do it?
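On Windows the bare schematool wrapper is a shell script, so it cannot be invoked directly from cmd or PowerShell. A sketch of a workaround (it assumes your distribution ships bin\hive.cmd; some Hive 3.x tarballs dropped the Windows .cmd scripts, and copying them from an older 2.x release is a commonly reported workaround):
hive --service schemaTool -dbType derby -initSchema
hive --service schemaTool -dbType derby -info    # verify the schema afterwards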
Update 2021/9/20:
I have fixed all the variables in my paths, and now I am stuck on a new problem. The error is quite clear, but my research turned up no solution:
PS C:\my_programs\hive_3.1.2> .\bin\hive
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/C:/my_programs/hive_3.1.2/lib/log4j-slf4j-impl-2.10.0.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/C:/my_programs/hadoop-3.2.2/share/hadoop/common/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
2021-09-20T20:22:34,274 INFO [main] org.apache.hadoop.hive.conf.HiveConf - Found configuration file file:/C:/my_programs/hive_3.1.2/conf/hive-site.xml
2021-09-20T20:22:34,694 WARN [main] org.apache.hadoop.hive.conf.HiveConf - HiveConf of name hive.server2.enable.impersonation does not exist
2021-09-20T20:22:37,728 WARN [main] org.apache.hadoop.hive.conf.HiveConf - HiveConf of name hive.server2.enable.impersonation does not exist
Hive Session ID = 89fc5e06-2a55-496c-aea0-ab5512839ac3
Logging initialized using configuration in jar:file:/C:/my_programs/hive_3.1.2/lib/hive-exec-3.1.2.jar!/hive-log4j2.properties Async: true
Exception in thread "main" java.lang.NoSuchMethodError: org.apache.hadoop.conf.Configuration.getTimeDuration(Ljava/lang/String;JLjava/util/concurrent/TimeUnit;Ljava/util/concurrent/TimeUnit;)J
at org.apache.hadoop.hdfs.client.impl.DfsClientConf.<init>(DfsClientConf.java:248)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:307)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:291)
at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:173)
at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3354)
at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:124)
at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3403)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3371)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:477)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:226)
at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:624)
at org.apache.hadoop.hive.ql.session.SessionState.beginStart(SessionState.java:591)
at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:747)
at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:683)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
at java.lang.reflect.Method.invoke(Unknown Source)
at org.apache.hadoop.util.RunJar.run(RunJar.java:323)
at org.apache.hadoop.util.RunJar.main(RunJar.java:236)
PS C:\my_programs\hive_3.1.2>
It seems like an error in the HDFS configuration, but I don't know what to change on the Hadoop side: core-site.xml, hdfs-site.xml, or something else. Do we need some configuration on the Hadoop side to connect with Hive?
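Before touching core-site.xml or hdfs-site.xml, it is worth checking for a classpath conflict: the four-argument getTimeDuration overload exists in Hadoop 3.2's hadoop-common, so this NoSuchMethodError usually means an older hadoop jar is being picked up first. A diagnostic sketch (PowerShell, using the install paths from the question):
# list any hadoop jars that Hive brings along; an old hadoop-common here can
# shadow the 3.2.2 one that DfsClientConf needs
Get-ChildItem C:\my_programs\hive_3.1.2\lib -Filter "hadoop-*.jar"
# confirm which Hadoop installation the environment actually points at
echo $env:HADOOP_HOME
C:\my_programs\hadoop-3.2.2\bin\hadoop.cmd version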

Unable to read HiveServer2 configs from ZooKeeper

I use HDP 3.1, and I used Ambari to deploy the Hadoop cluster and Hive. After deployment, I can run Hive in the shell successfully. I then deployed Apache Kylin 2.6, and it can sync Hive tables. But when I build the cube, I get the following error:
java.io.IOException: OS command error exit with return code: 1, error message: SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/hdp/3.1.0.0-78/hive/lib/log4j-slf4j-impl-2.10.0.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/hdp/3.1.0.0-78/hadoop/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
Connecting to jdbc:hive2://datacenter1:2181,datacenter2:2181,datacenter3:2181/default;password=hdfs;serviceDiscoveryMode=zooKeeper;user=hdfs;zooKeeperNamespace=hiveserver2
19/02/15 10:04:53 [main]: INFO jdbc.HiveConnection: Connected to datacenter3:10000
19/02/15 10:04:53 [main]: WARN jdbc.HiveConnection: Failed to connect to datacenter3:10000
19/02/15 10:04:53 [main]: ERROR jdbc.Utils: Unable to read HiveServer2 configs from ZooKeeper
Error: Could not open client transport for any of the Server URI's in ZooKeeper: Failed to open new session: java.lang.IllegalArgumentException: Cannot modify dfs.replication at runtime. It is not in list of params that are allowed to be modified at runtime (state=08S01,code=0)
Cannot run commands specified using -e. No current connection
The command is:
hive -e "USE default;
I ran the hive command in the shell and it succeeded. The connection string is the same as the one used when building the cube in Kylin. I'm confused why it succeeds in the shell but fails when building the cube. Here is the successful shell session:
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/hdp/3.1.0.0-78/hive/lib/log4j-slf4j-impl-2.10.0.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/hdp/3.1.0.0-78/hadoop/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
Connecting to jdbc:hive2://datacenter1:2181,datacenter2:2181,datacenter3:2181/default;password=hdfs;serviceDiscoveryMode=zooKeeper;user=hdfs;zooKeeperNamespace=hiveserver2
19/02/15 12:10:19 [main]: INFO jdbc.HiveConnection: Connected to datacenter3:10000
Connected to: Apache Hive (version 3.1.0.3.1.0.0-78)
Driver: Hive JDBC (version 3.1.0.3.1.0.0-78)
Transaction isolation: TRANSACTION_REPEATABLE_READ
Beeline version 3.1.0.3.1.0.0-78 by Apache Hive
0: jdbc:hive2://datacenter1:2181,datacenter2:>
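For reference, the working shell session above can be reproduced explicitly with Beeline (a sketch; the URL and the hdfs credentials are copied from the log lines above, nothing else is assumed):
beeline -u "jdbc:hive2://datacenter1:2181,datacenter2:2181,datacenter3:2181/default;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2" -n hdfs -p hdfs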
You can try to add these two properties to hive-site.xml.
<property>
  <name>hive.security.authorization.sqlstd.confwhitelist</name>
  <value>mapred.*|hive.*|mapreduce.*|spark.*</value>
</property>
<property>
  <name>hive.security.authorization.sqlstd.confwhitelist.append</name>
  <value>mapred.*|hive.*|mapreduce.*|spark.*</value>
</property>
Finally, I found the root cause. There is a 'Cannot modify dfs.replication at runtime.' message in the error log. Kylin sets this property in $KYLIN_HOME/conf/kylin_hive_conf.xml, and when it runs the hive command it automatically appends the properties from that file. The final command looks like: hive --hiveconf dfs.replication=2 ..........
It turns out the dfs.replication property cannot be appended to the hive command. I removed this property from kylin_hive_conf.xml, and it works now.
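To make the difference concrete, this is roughly what was happening (a sketch; the real Kylin-generated command carries more flags, elided here as in the log):
# rejected: dfs.replication is not in HiveServer2's runtime-modifiable whitelist
hive --hiveconf dfs.replication=2 -e "USE default;"
# works once the property is removed from kylin_hive_conf.xml
hive -e "USE default;"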

Create input file in HDFS

I'm trying to create an input directory in HDFS using this command:
hduser@salma-SATELLITE-C855-1EQ:/usr/local/hadoop$ ./bin/hadoop fs -mkdir /in
but it gives me an error saying the connection failed:
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/local/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/local/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
16/10/09 02:12:04 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
mkdir: Call From salma-SATELLITE-C855-1EQ/127.0.1.1 to localhost:9000 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
I already started the Hadoop services with start-all.sh:
hduser@salma-SATELLITE-C855-1EQ:/usr/local/hadoop$ jps
20437 Jps
20030 SecondaryNameNode
19839 DataNode
Can anyone help me solve this problem?
You are missing the NameNode (the primary daemon); that is what your command is attempting to connect to. Check its log to understand why it did not start.
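A minimal recovery sketch (paths assume the /usr/local/hadoop layout from the question; the format step is destructive and only applies if the log shows the name directory was never formatted):
/usr/local/hadoop/sbin/start-dfs.sh        # restart the HDFS daemons
jps                                        # expect NameNode alongside DataNode and SecondaryNameNode
tail -n 50 /usr/local/hadoop/logs/hadoop-hduser-namenode-*.log   # shows why it did not start
# /usr/local/hadoop/bin/hdfs namenode -format   # last resort only: wipes HDFS data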

Hive won't run in Hortonworks 2.2.4

I've just downloaded Hortonworks Sandbox 2.2.4, and when I follow the Hortonworks tutorial on Hive, I get this:
HCatClient error on create table: {
"statement": "use default; create table nyse_stocks(`exchange` string, `stock_symbol` string, `date` string, `stock_price_open` float, `stock_price_high` float, `stock_price_low` float, `stock_price_close` float, `stock_volume` bigint, `stock_price_adj_close` float) row format delimited fields terminated by '\\t';",
"error": "unable to create table: nyse_stocks",
"exec": {
"stdout": "",
"stderr": "
15/05/05 09:57:50 WARN conf.HiveConf: HiveConf of name hive.heapsize does not exist
15/05/05 09:57:50 WARN conf.HiveConf: HiveConf of name hive.server2.enable.impersonation does not exist
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/hdp/2.2.4.2-2/hadoop/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/hdp/2.2.4.2-2/hive/lib/hive-jdbc-0.14.0.2.2.4.2-2-standalone.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
Command was terminated due to timeout(60000ms). See templeton.exec.timeout property",
"exitcode": 143
}
} (error 500)
When I SSH into the Sandbox and simply type hive in the shell, I get this output (the same as the stderr above):
15/05/05 09:57:50 WARN conf.HiveConf: HiveConf of name hive.heapsize does not exist
15/05/05 09:57:50 WARN conf.HiveConf: HiveConf of name hive.server2.enable.impersonation does not exist
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/hdp/2.2.4.2-2/hadoop/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/hdp/2.2.4.2-2/hive/lib/hive-jdbc-0.14.0.2.2.4.2-2-standalone.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
What could be the solution for this?
There is an incompatibility problem with the SerDe jar inside the sandbox.
I had the same problem, and after some googling I found a solution at
https://github.com/brandonswilson.
Here is what I did:
a) removed the first line from hiveddl.sql
b) changed line 34 to:
ROW FORMAT SERDE 'org.apache.hive.hcatalog.data.JsonSerDe'
That's it!
(After running hive -f hiveddl.sql I hit a problem at line 119:
-- FAILED: SemanticException Unrecognized file format in STORED AS clause: 'RCFILESE' --
The line was: STORED AS RCFilese
I changed it to: STORED AS RCFile
You could make that change before running.)
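The same three edits as shell one-liners, for anyone scripting this (a sketch; it assumes the stock hiveddl.sql, where the first line is the ADD JAR statement and the original SerDe line names a JsonSerDe class — check your copy before running):
# swap the SerDe first (pattern-based, so line numbers don't matter)
sed -i "s|ROW FORMAT SERDE '.*JsonSerDe'|ROW FORMAT SERDE 'org.apache.hive.hcatalog.data.JsonSerDe'|" hiveddl.sql
# drop the ADD JAR line at the top
sed -i '1d' hiveddl.sql
# fix the RCFILESE typo at line 119
sed -i 's/STORED AS RCFilese/STORED AS RCFile/' hiveddl.sql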
Daniel Galizi.

Accumulo getting stuck and not starting

I've been trying for a few days to install Accumulo and try it out, but it gets stuck before even starting. I ended up using the Hortonworks Sandbox, which comes with Hadoop and ZooKeeper installed.
I followed the instructions on the Accumulo setup page and changed the configuration as below:
[root@sandbox ~]# vi /etc/accumulo/conf/accumulo-env.sh
if [ -z "$ACCUMULO_HOME" ]
then
test -z "$ACCUMULO_HOME" && export ACCUMULO_HOME=/usr/hdp/2.2.0.0-2041/accumulo
fi
if [ -z "$HADOOP_HOME" ]
then
test -z "$HADOOP_PREFIX" && export HADOOP_PREFIX=/usr/hdp/current/hadoop-client
else
HADOOP_PREFIX="$HADOOP_HOME"
unset HADOOP_HOME
fi
# hadoop-1.2:
# test -z "$HADOOP_CONF_DIR" && export HADOOP_CONF_DIR="$HADOOP_PREFIX/conf"
# hadoop-2.0:
test -z "$HADOOP_CONF_DIR" && export HADOOP_CONF_DIR="$HADOOP_PREFIX/etc/hadoop/conf"
test -z "$JAVA_HOME" && export JAVA_HOME=/usr/lib/jvm/java-1.7.0-openjdk.x86_64
test -z "$ZOOKEEPER_HOME" && export ZOOKEEPER_HOME=/usr/hdp/current/zookeeper-client
test -z "$ACCUMULO_LOG_DIR" && export ACCUMULO_LOG_DIR=$ACCUMULO_HOME/logs
if [ -f ${ACCUMULO_CONF_DIR}/accumulo.policy ]
then
POLICY="-Djava.security.manager -Djava.security.policy=${ACCUMULO_CONF_DIR}/accumulo.policy"
fi
Using this configuration, I was able to start Accumulo successfully:
[root@sandbox ~]# /usr/hdp/2.2.0.0-2041/accumulo/bin/start-all.sh
Starting monitor on localhost
WARN : Max open files on localhost is 1024, recommend 32768
Starting tablet servers .... done
Starting tablet server on localhost
WARN : Max open files on localhost is 1024, recommend 32768
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/root/reza/accumulo-1.6.1/lib/slf4j-log4j12.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/hdp/current/hadoop-client/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/hdp/current/hadoop-client/client/slf4j-log4j12.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/hdp/current/hadoop-client/client/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
2015-02-09 05:30:21,742 [fs.VolumeManagerImpl] WARN : dfs.datanode.synconclose set to false in hdfs-site.xml: data loss is possible on system reset or power loss
2015-02-09 05:30:21,760 [server.Accumulo] INFO : Attempting to talk to zookeeper
2015-02-09 05:30:21,920 [server.Accumulo] INFO : Zookeeper connected and initialized, attemping to talk to HDFS
2015-02-09 05:30:22,195 [server.Accumulo] INFO : Connected to HDFS
2015-02-09 05:30:22,207 [fs.VolumeManagerImpl] WARN : dfs.datanode.synconclose set to false in hdfs-site.xml: data loss is possible on system reset or power loss
Starting master on localhost
WARN : Max open files on localhost is 1024, recommend 32768
Starting garbage collector on localhost
WARN : Max open files on localhost is 1024, recommend 32768
Starting tracer on localhost
WARN : Max open files on localhost is 1024, recommend 32768
I was also able to initialize Accumulo successfully:
[root@sandbox ~]# /usr/hdp/2.2.0.0-2041/accumulo/bin/accumulo init
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/root/reza/accumulo-1.6.1/lib/slf4j-log4j12.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/hdp/current/hadoop-client/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/hdp/current/hadoop-client/client/slf4j-log4j12.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/hdp/current/hadoop-client/client/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
2015-02-09 04:40:56,225 [fs.VolumeManagerImpl] WARN : dfs.datanode.synconclose set to false in hdfs-site.xml: data loss is possible on system reset or power loss
2015-02-09 04:40:56,229 [init.Initialize] INFO : Hadoop Filesystem is hdfs://sandbox.hortonworks.com:8020
2015-02-09 04:40:56,232 [init.Initialize] INFO : Accumulo data dirs are [hdfs://sandbox.hortonworks.com:8020/accumulo]
2015-02-09 04:40:56,232 [init.Initialize] INFO : Zookeeper server is localhost:2181
2015-02-09 04:40:56,232 [init.Initialize] INFO : Checking if Zookeeper is available. If this hangs, then you need to make sure zookeeper is running
Warning!!! Your instance secret is still set to the default, this is not secure. We highly recommend you change it.
You can change the instance secret in accumulo by using:
bin/accumulo org.apache.accumulo.server.util.ChangeSecret oldPassword newPassword.
You will also need to edit your secret in your configuration file by adding the property instance.secret to your conf/accumulo-site.xml. Without this accumulo will not operate correctly
Instance name : reza
Instance name "reza" exists. Delete existing entry from zookeeper? [Y/N] : N
Instance name : reza2
Enter initial password for root (this may not be applicable for your security setup): ****
Confirm initial password for root: ****
2015-02-09 04:42:32,497 [Configuration.deprecation] INFO : dfs.replication.min is deprecated. Instead, use dfs.namenode.replication.min
2015-02-09 04:42:32,509 [fs.VolumeManagerImpl] WARN : dfs.datanode.synconclose set to false in hdfs-site.xml: data loss is possible on system reset or power loss
2015-02-09 04:42:33,208 [Configuration.deprecation] INFO : dfs.block.size is deprecated. Instead, use dfs.blocksize
2015-02-09 04:42:33,964 [conf.AccumuloConfiguration] INFO : Loaded class : org.apache.accumulo.server.security.handler.ZKAuthorizor
2015-02-09 04:42:33,969 [conf.AccumuloConfiguration] INFO : Loaded class : org.apache.accumulo.server.security.handler.ZKAuthenticator
2015-02-09 04:42:33,973 [conf.AccumuloConfiguration] INFO : Loaded class : org.apache.accumulo.server.security.handler.ZKPermHandler
However, when I try to start an Accumulo shell, it gets stuck without throwing any error:
[root@sandbox ~]# /usr/bin/accumulo shell --password reza
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/root/reza/accumulo-1.6.1/lib/slf4j-log4j12.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/hdp/current/hadoop-client/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/hdp/current/hadoop-client/client/slf4j-log4j12.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/hdp/current/hadoop-client/client/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
2015-02-09 05:21:15,397 [impl.ServerClient] WARN : There are no tablet servers: check that zookeeper and accumulo are running.
It does not progress any further and does not give me a shell to try. Any advice on how to fix this, or on what the problem is?
You probably have to specify the instance name and ZooKeeper host information. Launch the shell with --help to find the correct command-line options.
Note: You can debug the Accumulo shell by launching it with --debug on the command line.
Did you start Accumulo before trying to launch the shell?
/usr/hdp/2.2.0.0-2041/accumulo/bin/start-all.sh
And then
/usr/bin/accumulo shell --password reza
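If the shell still hangs, pass the instance and ZooKeeper details explicitly (a sketch; the instance name is the one created during init above, and the flag names are the ones accumulo shell --help lists in 1.6):
/usr/hdp/2.2.0.0-2041/accumulo/bin/accumulo shell -u root -p reza -zi reza2 -zh localhost:2181
# add --debug for verbose client-side logging while it waits for tablet servers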
