Kettle - pan.sh "No repository provided, can't load transformation" - bash

I've created a Kettle transformation and tested it on my PC, where it works. I then moved it to the server and ran it as a bash script via pan.sh. It worked at first, but after a few runs it started failing with this problem:
server$ bash pan.sh file="API_Mining_LatestVersion.ktr"
#######################################################################
WARNING: no libwebkitgtk-1.0 detected, some features will be unavailable
Consider installing the package with apt-get or yum.
e.g. 'sudo apt-get install libwebkitgtk-1.0-0'
#######################################################################
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=512m; support was removed in 8.0
14:56:00,682 INFO [KarafBoot] Checking to see if org.pentaho.clean.karaf.cache is enabled
14:56:00,803 INFO [KarafInstance]
*******************************************************************************
*** Karaf Instance Number: 2 at /data/Fernando/data-integration_updated/./system/karaf/caches/pan/data-1 ***
*** FastBin Provider Port:52902 ***
*** Karaf Port:8803 ***
*** OSGI Service Port:9052 ***
*******************************************************************************
Nov 20, 2018 2:56:01 PM org.apache.karaf.main.Main$KarafLockCallback lockAquired
INFO: Lock acquired. Setting startlevel to 100
*ERROR* [org.osgi.service.cm.ManagedService, id=255, bundle=53/mvn:org.apache.aries.transaction/org.apache.aries.transaction.manager/1.1.1]: Updating configuration org.apache.aries.transaction caused a problem: null
org.osgi.service.cm.ConfigurationException: null : null
at org.apache.aries.transaction.internal.TransactionManagerService.<init>(TransactionManagerService.java:136)
at org.apache.aries.transaction.internal.Activator.updated(Activator.java:63)
at org.apache.felix.cm.impl.helper.ManagedServiceTracker.updateService(ManagedServiceTracker.java:148)
at org.apache.felix.cm.impl.helper.ManagedServiceTracker.provideConfiguration(ManagedServiceTracker.java:81)
at org.apache.felix.cm.impl.ConfigurationManager$ManagedServiceUpdate.provide(ConfigurationManager.java:1448)
at org.apache.felix.cm.impl.ConfigurationManager$ManagedServiceUpdate.run(ConfigurationManager.java:1404)
at org.apache.felix.cm.impl.UpdateThread.run(UpdateThread.java:103)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.objectweb.howl.log.LogConfigurationException: Unable to obtain lock on /data/Fernando/data-integration/system/karaf/caches/pan/data-1/txlog/transaction_1.log
at org.objectweb.howl.log.LogFile.open(LogFile.java:191)
at org.objectweb.howl.log.LogFileManager.open(LogFileManager.java:784)
at org.objectweb.howl.log.Logger.open(Logger.java:304)
at org.objectweb.howl.log.xa.XALogger.open(XALogger.java:893)
at org.apache.aries.transaction.internal.HOWLLog.doStart(HOWLLog.java:233)
at org.apache.aries.transaction.internal.TransactionManagerService.<init>(TransactionManagerService.java:133)
... 7 more
2018-11-20 14:56:04.508:INFO:oejs.Server:jetty-8.1.15.v20140411
2018-11-20 14:56:04.544:INFO:oejs.AbstractConnector:Started NIOSocketConnectorWrapper@0.0.0.0:9052
[...]
INFO: New Caching Service registered
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/data/Fernando/data-integration_updated/launcher/../lib/slf4j-log4j12-1.7.7.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/data/Fernando/data-integration_updated/plugins/pentaho-big-data-plugin/lib/slf4j-log4j12-1.7.7.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
2018/11/20 14:56:09 - Pan - Start of run.
ERROR: No repository provided, can't load transformation.
I don't understand where the problem is. The transformation file hasn't changed, and it also contains the repo, user, and pass parameters.
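Two things are worth checking, based on the output above. First, Pan's documented option syntax uses a -, --, or / prefix, so a bare file=... token can be misparsed, which makes Pan fall back to repository mode and fail with exactly this message. Second, the lock error points at /data/Fernando/data-integration/... while this instance runs from /data/Fernando/data-integration_updated/..., so another installation or a stale process may be holding the Karaf transaction log. A minimal sketch, assuming the documented -file and -norep flags and the cache path from the log above:

server$ ./pan.sh -file="API_Mining_LatestVersion.ktr" -norep
# If the txlog lock error persists, verify no other pan/kitchen process is
# running, then clear the stale Karaf cache (path taken from the banner above):
server$ rm -rf /data/Fernando/data-integration_updated/system/karaf/caches/pan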

Related

How to run Quicksql?

When I run Quicksql, an error arises, as shown below.
The link:
https://github.com/Qihoo360/Quicksql/blob/master/doc/BUILD_doc.md
ERROR StatusLogger No Log4j 2 configuration file found. Using default configuration (logging only errors to the console), or user programmatically provided configurations. Set system property 'log4j2.debug' to show Log4j 2 internal initialization logging. See https://logging.apache.org/log4j/2.x/manual/configuration.html for instructions on how to configure Log4j 2
WARNING: An illegal reflective access operation has occurred
WARNING: Illegal reflective access by io.netty.util.internal.ReflectionUtil (file:/usr/local/qsql-0.6/lib/netty-common-4.1.16.Final.jar) to constructor java.nio.DirectByteBuffer(long,int)
WARNING: Please consider reporting this to the maintainers of io.netty.util.internal.ReflectionUtil
WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
WARNING: All illegal access operations will be denied in a future release
Elasticsearch Embedded Server is starting up, waiting....
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/local/qsql-0.6/lib/slf4j-log4j12-1.7.13.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/Users/wangjinsa/Downloads/Compressed/spark-2.4/jars/slf4j-log4j12-1.7.16.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
Elasticsearch Embedded Server has started!! Your query is running...
Input: SELECT * FROM depts INNER JOIN (SELECT * FROM student WHERE city in ('FRAMINGHAM', 'BROCKTON', 'CONCORD')) FILTERED ON depts.name = FILTERED.type
.........

Unable to start hive using tez execution engine

I'm using Hadoop 2.7.3 and Hive 1.2.1.
I'm facing a problem with Hive on the Tez execution engine. Is there a setup error or some other kind of error?
Logging initialized using configuration in jar:file:/usr/local/hive/lib/hive-common-1.2.1.jar!/hive-log4j.properties
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/home/hduser/tez/tez/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/local/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
Exception in thread "main" java.lang.RuntimeException: org.apache.tez.dag.api.SessionNotRunning: TezSession has already shutdown.
Application application_1568628322588_0002 failed 2 times due to AM Container for appattempt_1568628322588_0002_000002 exited with exitCode: 1
For more detailed output, check application tracking page: http://rohan-VirtualBox:8088/cluster/app/application_1568628322588_0002 Then, click on links to logs of each attempt.
Diagnostics: Exception from container-launch.
Container id: container_1568628322588_0002_02_000001
Exit code: 1
Stack trace: ExitCodeException exitCode=1:
at org.apache.hadoop.util.Shell.runCommand(Shell.java:582)
at org.apache.hadoop.util.Shell.run(Shell.java:479)
at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:773)
at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:212)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:302)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:82)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
This is likely similar to Application failed 2 times due to AM Container: exited with exitCode: 1.
Essentially, the output you pasted does not contain the actual error, so we cannot help much with it. You need to find the exact error message by going to your Resource Manager and looking at the logs there.
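For example, the aggregated container logs for the failed attempt can be pulled with the standard YARN CLI (this assumes log aggregation is enabled on the cluster; the application id is the one from the log above):

yarn logs -applicationId application_1568628322588_0002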

Failed to initialize schema for HiveServer2 in Apache Hive 3.0.0 on Cygwin (Windows 10)

I already had a Hadoop 3.0.0 cluster consisting of 2 machines: 1 namenode + RM and 1 datanode. I tried to install Apache Hive 3.0.0 by following this document.
When I run schematool -dbType derby -initSchema --verbose on Cygwin, an exception was thrown:
$ schematool -dbType derby -initSchema --verbose
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/C:/BigSol/apache-hive-3.0.0-bin/lib/log4j-slf4j-impl-2.10.0.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/C:/BigSol/hadoop-3.0.0/share/hadoop/common/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
Metastore connection URL: jdbc:derby:;databaseName=metastore_db;create=true
Metastore Connection Driver : org.apache.derby.jdbc.EmbeddedDriver
Metastore connection User: APP
Starting metastore schema initialization to 3.0.0
org.apache.hadoop.hive.metastore.HiveMetaException: Unknown version specified for initialization: 3.0.0
org.apache.hadoop.hive.metastore.HiveMetaException: Unknown version specified for initialization: 3.0.0
at org.apache.hadoop.hive.metastore.MetaStoreSchemaInfo.generateInitFileName(MetaStoreSchemaInfo.java:137)
at org.apache.hive.beeline.HiveSchemaTool.doInit(HiveSchemaTool.java:580)
at org.apache.hive.beeline.HiveSchemaTool.doInit(HiveSchemaTool.java:562)
at org.apache.hive.beeline.HiveSchemaTool.main(HiveSchemaTool.java:1445)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.util.RunJar.run(RunJar.java:239)
at org.apache.hadoop.util.RunJar.main(RunJar.java:153)
*** schemaTool failed ***
When viewing the line of code that threw the exception, I found that Hive tried to find a SQL schema located at %HIVE_HOME%\scripts\metastore\upgrade\derby\hive-schema-3.0.0.derby.sql.
I suspect that Cygwin messed up the path so that Hive didn't find that schema.
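A quick way to test that hypothesis is to check, from the Cygwin shell, whether the file the stack trace is looking for actually exists at the path named above (with $HIVE_HOME substituted):

ls "$HIVE_HOME"/scripts/metastore/upgrade/derby/hive-schema-3.0.0.derby.sql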
My questions:
How can I correct the path (or fix the problem)?
Are there batch files in the %HIVE_HOME%\bin directory equivalent to the *.sh files, as Hive 2.1.1 has?
I found the solution. After running schematool on a Linux machine and copying the metastore_db directory to the Windows machine, I managed to start HiveServer2, but the Beeline CLI said that the jar at C:\cygdrive\c\BigSol\apache-hive-3.0.0-bin\lib\hive-beeline-3.1.0.jar was not found.
It turned out that Java under Cygwin parses the path incorrectly. I made a symbolic link from C:\cygdrive\c to C:\ and it worked.
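For anyone reproducing this, a minimal sketch of that workaround from an elevated shell (mklink is a cmd built-in, so under Cygwin it has to be invoked through cmd /c; the directory names are the ones from the error above):

mkdir -p /cygdrive/c/cygdrive
cmd /c 'mklink /D C:\cygdrive\c C:\'

After that, C:\cygdrive\c\BigSol\... resolves to C:\BigSol\..., which is where the jar actually lives.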

Why does Zeppelin 0.6.2 notebook fail with "The input line is too long" with Spark 2.0 on Windows?

I am running Zeppelin 0.6.2 on Windows with Spark 2.0:
SPARK_HOME=C:\Users\anbarasu.r\Desktop\Archive\spark-2.0.0-bin-hadoop2.6
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=512m; support was removed in 8.0
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/C:/Users/anbarasu.r/Desktop/Archive/zeppelin-0.6.2-bin-all/zeppelin-0.6.2-bin-all/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/C:/Users/anbarasu.r/Desktop/Archive/zeppelin-0.6.2-bin-all/zeppelin-0.6.2-bin-all/lib/zeppelin-interpreter-0.6.2.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
Nov 19, 2016 1:48:15 PM com.sun.jersey.api.core.PackagesResourceConfig init INFO: Scanning for root resource and provider classes in the packages: org.apache.zeppelin.rest
Nov 19, 2016 1:48:15 PM com.sun.jersey.api.core.ScanningResourceConfig logClasses
INFO: Root resource classes found:
class org.apache.zeppelin.rest.ZeppelinRestApi
class org.apache.zeppelin.rest.ConfigurationsRestApi
class org.apache.zeppelin.rest.CredentialRestApi
class org.apache.zeppelin.rest.NotebookRestApi
class org.apache.zeppelin.rest.LoginRestApi
class org.apache.zeppelin.rest.InterpreterRestApi
class org.apache.zeppelin.rest.SecurityRestApi
Nov 19, 2016 1:48:15 PM com.sun.jersey.api.core.ScanningResourceConfig init
INFO: No provider classes found.
Nov 19, 2016 1:48:15 PM com.sun.jersey.server.impl.application.WebApplicationImpl _initiate
INFO: Initiating Jersey application, version 'Jersey: 1.13 06/29/2012 05:14 PM'
Nov 19, 2016 1:48:18 PM com.sun.jersey.spi.inject.Errors processErrorMessages
WARNING: The following warnings have been detected with resource and/or provider classes:
WARNING: A HTTP GET method, public javax.ws.rs.core.Response org.apache.zeppelin.rest.InterpreterRestApi.listInterpreter(java.lang.String), should not consume any entity.
WARNING: A sub-resource method, public javax.ws.rs.core.Response org.apache.zeppelin.rest.NotebookRestApi.createNote(java.lang.String) throws java.io.IOException, with URI template, "/", is treated as a resource method
WARNING: A sub-resource method, public javax.ws.rs.core.Response org.apache.zeppelin.rest.NotebookRestApi.getNotebookList() throws java.io.IOException, with URI template, "/", is treated as a resource method
SPARK_HOME=C:\Users\anbarasu.r\Desktop\Archive\spark-2.0.0-bin-hadoop2.6
The input line is too long.
The error

The input line is too long

appears in the command line when I run any command in the notebook, yet Zeppelin 0.6.2 works perfectly with Spark 1.6.2.
Please suggest a way to make Zeppelin 0.6.2 work with Spark 2.0.0.
The reason for the error is among the "niceties" that make people "love" Windows so much :) It used to have very poor command-line support: cmd.exe caps a single command line at 8,191 characters, and long install paths inflate the generated classpath past that limit.
(I said "used to have" as things have changed recently with bash and Docker support.)
I strongly suggest installing Spark and Zeppelin in directories with shorter paths, say:
c:\spark for Apache Spark
c:\zeppelin for Apache Zeppelin
And start over.
Moreover, you don't have to install Spark separately to use Zeppelin, as it comes with Spark pre-installed/bundled. That gives you one less thing to worry about, i.e. no need for SPARK_HOME.
You've got them as follows:
C:\Users\anbarasu.r\Desktop\Archive\spark-2.0.0-bin-hadoop2.6 for Apache Spark
C:/Users/anbarasu.r/Desktop/Archive/zeppelin-0.6.2-bin-all/zeppelin-0.6.2-bin-all for Apache Zeppelin
which (as I've just experienced) eats too much of the space available for paths on Windows.
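For completeness, a hedged sketch of the suggested layout in a Windows command prompt (zeppelin.cmd is the stock Windows launcher in Zeppelin's bin directory; the source paths are the ones above):

move "C:\Users\anbarasu.r\Desktop\Archive\spark-2.0.0-bin-hadoop2.6" c:\spark
move "C:\Users\anbarasu.r\Desktop\Archive\zeppelin-0.6.2-bin-all\zeppelin-0.6.2-bin-all" c:\zeppelin
set SPARK_HOME=c:\spark
c:\zeppelin\bin\zeppelin.cmd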

Accumulo getting stuck and not starting

I've been trying for a few days to install Accumulo and try it out, but it gets stuck before even starting. I ended up using the Hortonworks Sandbox, which comes with Hadoop and ZooKeeper installed.
I followed the instructions on the Accumulo setup page and changed the configuration as below:
[root@sandbox ~]# vi /etc/accumulo/conf/accumulo-env.sh
if [ -z "$ACCUMULO_HOME" ]
then
test -z "$ACCUMULO_HOME" && export ACCUMULO_HOME=/usr/hdp/2.2.0.0-2041/accumulo
fi
if [ -z "$HADOOP_HOME" ]
then
test -z "$HADOOP_PREFIX" && export HADOOP_PREFIX=/usr/hdp/current/hadoop-client
else
HADOOP_PREFIX="$HADOOP_HOME"
unset HADOOP_HOME
fi
# hadoop-1.2:
# test -z "$HADOOP_CONF_DIR" && export HADOOP_CONF_DIR="$HADOOP_PREFIX/conf"
# hadoop-2.0:
test -z "$HADOOP_CONF_DIR" && export HADOOP_CONF_DIR="$HADOOP_PREFIX/etc/hadoop/conf"
test -z "$JAVA_HOME" && export JAVA_HOME=/usr/lib/jvm/java-1.7.0-openjdk.x86_64
test -z "$ZOOKEEPER_HOME" && export ZOOKEEPER_HOME=/usr/hdp/current/zookeeper-client
test -z "$ACCUMULO_LOG_DIR" && export ACCUMULO_LOG_DIR=$ACCUMULO_HOME/logs
if [ -f ${ACCUMULO_CONF_DIR}/accumulo.policy ]
then
POLICY="-Djava.security.manager -Djava.security.policy=${ACCUMULO_CONF_DIR}/accumulo.policy"
fi
Using this configuration, I was able to start Accumulo successfully:
[root@sandbox ~]# /usr/hdp/2.2.0.0-2041/accumulo/bin/start-all.sh
Starting monitor on localhost
WARN : Max open files on localhost is 1024, recommend 32768
Starting tablet servers .... done
Starting tablet server on localhost
WARN : Max open files on localhost is 1024, recommend 32768
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/root/reza/accumulo-1.6.1/lib/slf4j-log4j12.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/hdp/current/hadoop-client/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/hdp/current/hadoop-client/client/slf4j-log4j12.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/hdp/current/hadoop-client/client/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
2015-02-09 05:30:21,742 [fs.VolumeManagerImpl] WARN : dfs.datanode.synconclose set to false in hdfs-site.xml: data loss is possible on system reset or power loss
2015-02-09 05:30:21,760 [server.Accumulo] INFO : Attempting to talk to zookeeper
2015-02-09 05:30:21,920 [server.Accumulo] INFO : Zookeeper connected and initialized, attemping to talk to HDFS
2015-02-09 05:30:22,195 [server.Accumulo] INFO : Connected to HDFS
2015-02-09 05:30:22,207 [fs.VolumeManagerImpl] WARN : dfs.datanode.synconclose set to false in hdfs-site.xml: data loss is possible on system reset or power loss
Starting master on localhost
WARN : Max open files on localhost is 1024, recommend 32768
Starting garbage collector on localhost
WARN : Max open files on localhost is 1024, recommend 32768
Starting tracer on localhost
WARN : Max open files on localhost is 1024, recommend 32768
I was also able to initialize Accumulo successfully:
[root@sandbox ~]# /usr/hdp/2.2.0.0-2041/accumulo/bin/accumulo init
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/root/reza/accumulo-1.6.1/lib/slf4j-log4j12.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/hdp/current/hadoop-client/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/hdp/current/hadoop-client/client/slf4j-log4j12.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/hdp/current/hadoop-client/client/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
2015-02-09 04:40:56,225 [fs.VolumeManagerImpl] WARN : dfs.datanode.synconclose set to false in hdfs-site.xml: data loss is possible on system reset or power loss
2015-02-09 04:40:56,229 [init.Initialize] INFO : Hadoop Filesystem is hdfs://sandbox.hortonworks.com:8020
2015-02-09 04:40:56,232 [init.Initialize] INFO : Accumulo data dirs are [hdfs://sandbox.hortonworks.com:8020/accumulo]
2015-02-09 04:40:56,232 [init.Initialize] INFO : Zookeeper server is localhost:2181
2015-02-09 04:40:56,232 [init.Initialize] INFO : Checking if Zookeeper is available. If this hangs, then you need to make sure zookeeper is running
Warning!!! Your instance secret is still set to the default, this is not secure. We highly recommend you change it.
You can change the instance secret in accumulo by using:
bin/accumulo org.apache.accumulo.server.util.ChangeSecret oldPassword newPassword.
You will also need to edit your secret in your configuration file by adding the property instance.secret to your conf/accumulo-site.xml. Without this accumulo will not operate correctly
Instance name : reza
Instance name "reza" exists. Delete existing entry from zookeeper? [Y/N] : N
Instance name : reza2
Enter initial password for root (this may not be applicable for your security setup): ****
Confirm initial password for root: ****
2015-02-09 04:42:32,497 [Configuration.deprecation] INFO : dfs.replication.min is deprecated. Instead, use dfs.namenode.replication.min
2015-02-09 04:42:32,509 [fs.VolumeManagerImpl] WARN : dfs.datanode.synconclose set to false in hdfs-site.xml: data loss is possible on system reset or power loss
2015-02-09 04:42:33,208 [Configuration.deprecation] INFO : dfs.block.size is deprecated. Instead, use dfs.blocksize
2015-02-09 04:42:33,964 [conf.AccumuloConfiguration] INFO : Loaded class : org.apache.accumulo.server.security.handler.ZKAuthorizor
2015-02-09 04:42:33,969 [conf.AccumuloConfiguration] INFO : Loaded class : org.apache.accumulo.server.security.handler.ZKAuthenticator
2015-02-09 04:42:33,973 [conf.AccumuloConfiguration] INFO : Loaded class : org.apache.accumulo.server.security.handler.ZKPermHandler
However, when I try to start an Accumulo shell to try it, it gets stuck without throwing any error:
[root@sandbox ~]# /usr/bin/accumulo shell --password reza
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/root/reza/accumulo-1.6.1/lib/slf4j-log4j12.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/hdp/current/hadoop-client/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/hdp/current/hadoop-client/client/slf4j-log4j12.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/hdp/current/hadoop-client/client/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
2015-02-09 05:21:15,397 [impl.ServerClient] WARN : There are no tablet servers: check that zookeeper and accumulo are running.
It does not progress any further and does not give me a shell to try. Any advice on how to fix this, or what the problem is?
You probably have to specify the instance name and ZooKeeper host information. Launch the shell with --help to find the correct command-line options.
Note: You can debug the Accumulo shell by launching it with --debug on the command line.
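A minimal sketch, assuming the reza2 instance created during init above and the localhost:2181 ZooKeeper reported there (the -zi and -zh flag names are from accumulo shell --help in 1.6):

/usr/hdp/2.2.0.0-2041/accumulo/bin/accumulo shell -u root -p reza -zi reza2 -zh localhost:2181

If the "There are no tablet servers" warning still appears, the tablet servers themselves are down, and the shell will keep waiting regardless of these options.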
Did you start Accumulo before trying to launch the shell?
/usr/hdp/2.2.0.0-2041/accumulo/bin/start-all.sh
And then
/usr/bin/accumulo shell --password reza
