I get the error message below when I start my Flume agent:
17/10/16 19:33:17 ERROR node.AbstractConfigurationProvider: Sink hdfssink has been removed due to an error during configuration
java.lang.IllegalStateException: Sink hdfssink is not connected to a channel
at org.apache.flume.node.AbstractConfigurationProvider.loadSinks(AbstractConfigurationProvider.java:419)
at org.apache.flume.node.AbstractConfigurationProvider.getConfiguration(AbstractConfigurationProvider.java:98)
at org.apache.flume.node.PollingPropertiesFileConfigurationProvider$FileWatcherRunnable.run(PollingPropertiesFileConfigurationProvider.java:140)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:304)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:178)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
17/10/16 19:33:17 INFO node.AbstractConfigurationProvider: Channel loggerchannel connected to [logsource, loggersink]
17/10/16 19:33:17 INFO node.Application: Starting new configuration:{ sourceRunners:{logsource=EventDrivenSourceRunner: { source:org.apache.flume.source.ExecSource{name:logsource,state:IDLE} }} sinkRunners:{loggersink=SinkRunner: { policy:org.apache.flume.sink.DefaultSinkProcessor#5a311ade counterGroup:{ name:null counters:{} } }} channels:{loggerchannel=org.apache.flume.channel.MemoryChannel{name: loggerchannel}} }
17/10/16 19:33:17 INFO node.Application: Starting Channel loggerchannel
17/10/16 19:33:17 INFO instrumentation.MonitoredCounterGroup: Monitored counter group for type: CHANNEL, name: loggerchannel: Successfully registered new MBean.
17/10/16 19:33:17 INFO instrumentation.MonitoredCounterGroup: Component type: CHANNEL, name: loggerchannel started
17/10/16 19:33:17 INFO node.Application: Starting Sink loggersink
17/10/16 19:33:17 INFO node.Application: Starting Source logsource
17/10/16 19:33:17 INFO source.ExecSource: Exec source starting with command:tail -F /opt/gen_logs/logs/access.log
It doesn't write any files to the HDFS sink. I verified each and every line of the configuration file. Below is my Flume configuration file:
fmp.sources = logsource
fmp.sinks = loggersink hdfssink
fmp.channels = loggerchannel hdfschannel
fmp.sources.logsource.type=exec
fmp.sources.logsource.command = tail -F /opt/gen_logs/logs/access.log
fmp.sinks.loggersink.type=logger
fmp.sinks.hdfssink.type=hdfs
fmp.sinks.hdfssink.hdfs.path=hdfs://quickstart.cloudera:8020/user/cloudera/flume
fmp.channels.loggerchannel.type=memory
fmp.channels.loggerchannel.capacity=1000
fmp.channels.loggerchannel.transactioncapacity=100
fmp.channels.hdfschannel.type=file
fmp.channels.hdfschannel.capacity=1000
fmp.channels.hdfschannel.transactioncapacity=100
fmp.sources.logsource.channels = hdfschannel loggerchannel
fmp.sinks.loggersink.channel = loggerchannel
fmp.sinks.hdfssink.channel = hdfschannel
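As a point of comparison, here is a hedged sketch of the HDFS-side wiring using the documented, case-sensitive property names (transactionCapacity rather than transactioncapacity). The hdfs.fileType line is an assumption about the desired output format, and the agent must be started with --name fmp so that the fmp prefix is picked up:
fmp.sinks.hdfssink.type = hdfs
fmp.sinks.hdfssink.hdfs.path = hdfs://quickstart.cloudera:8020/user/cloudera/flume
# Assumption: write plain text events rather than the default SequenceFile
fmp.sinks.hdfssink.hdfs.fileType = DataStream
fmp.sinks.hdfssink.channel = hdfschannel
fmp.channels.hdfschannel.type = file
fmp.channels.hdfschannel.capacity = 1000
fmp.channels.hdfschannel.transactionCapacity = 100
fmp.sources.logsource.channels = loggerchannel hdfschannel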
DebeziumEngine is looking for a Kafka topic even though I have not specified KafkaOffsetBackingStore for offset.storage
Reference: DebeziumEngine Config
Config
Configuration config = Configuration.create()
        .with("name", "oracle_debezium_connector")
        .with("connector.class", "io.debezium.connector.oracle.OracleConnector")
        .with("offset.storage", "org.apache.kafka.connect.storage.FileOffsetBackingStore")
        .with("offset.storage.file.filename", "/Users/dk/Documents/work/ACET/offset.dat")
        .with("offset.flush.interval.ms", 2000)
        .with("database.hostname", "localhost")
        .with("database.port", "1521")
        .with("database.user", "pravin")
        .with("database.password", "*****")
        .with("database.sid", "ORCLCDB")
        .with("database.server.name", "mServer")
        .with("database.out.server.name", "dbzxout")
        .with("database.history", "io.debezium.relational.history.FileDatabaseHistory")
        .with("database.history.file.filename", "/Users/dk/Documents/work/ACET/dbhistory.dat")
        .with("topic.prefix", "cycowner")
        .with("database.dbname", "ORCLCDB")
        .build();
DebeziumEngine
DebeziumEngine<ChangeEvent<String, String>> engine = DebeziumEngine.create(Json.class)
        .using(config.asProperties())
        .using(connectorCallback)
        .using(completionCallback)
        .notifying(record -> {
            System.out.println(record);
        })
        .build();
Error:
2022-10-29T16:06:16,457 ERROR [pool-2-thread-1] i.d.c.Configuration: The 'schema.history.internal.kafka.topic' value is invalid: A value is required
2022-10-29T16:06:16,457 ERROR [pool-2-thread-1] i.d.c.Configuration: The 'schema.history.internal.kafka.bootstrap.servers' value is invalid: A value is required
2022-10-29T16:06:16,458 INFO [pool-2-thread-1] i.d.c.c.BaseSourceTask: Stopping down connector
2022-10-29T16:06:16,463 INFO [pool-3-thread-1] i.d.j.JdbcConnection: Connection gracefully closed
2022-10-29T16:06:16,465 INFO [pool-2-thread-1] o.a.k.c.s.FileOffsetBackingStore: Stopped FileOffsetBackingStore
connector stopped successfully
---------------------------------------------------
success status: false, message : Unable to initialize and start connector's task class 'io.debezium.connector.oracle.OracleConnectorTask' with config: {connector.class=io.debezium.connector.oracle.OracleConnector, database.history.file.filename=/Users/dkuma416/Documents/work/ACET/dbhistory.dat, database.user=pravin, database.dbname=ORCLCDB, offset.storage=org.apache.kafka.connect.storage.FileOffsetBackingStore, database.server.name=mServer, offset.flush.timeout.ms=5000, errors.retry.delay.max.ms=10000, database.port=1521, database.sid=ORCLCDB, offset.flush.interval.ms=2000, topic.prefix=cycowner, offset.storage.file.filename=/Users/dkuma416/Documents/work/ACET/offset.dat, errors.max.retries=-1, database.hostname=localhost, database.password=********, name=oracle_debezium_connector, database.out.server.name=dbzxout, errors.retry.delay.initial.ms=300, value.converter=org.apache.kafka.connect.json.JsonConverter, key.converter=org.apache.kafka.connect.json.JsonConverter, database.history=io.debezium.relational.history.MemoryDatabaseHistory}, Error: Error configuring an instance of KafkaSchemaHistory; check the logs for details
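The two "value is invalid: A value is required" errors are about schema.history.internal.kafka.*, which suggests a Debezium 2.x connector: in 2.x the database.history.* options were renamed to schema.history.internal.*, and without an explicit schema history class the connector falls back to the Kafka-based one, which is why it asks for a Kafka topic and bootstrap servers. A hedged sketch of the file-based variant, assuming Debezium 2.x with the debezium-storage-file module on the classpath (class name and path are assumptions to verify against your version):
// Hedged sketch (assumes Debezium 2.x): configure a file-based schema history
// via the renamed schema.history.internal.* keys instead of database.history.*.
Configuration config = Configuration.create()
        // ... connector.class, offset.storage and database.* settings as above ...
        .with("schema.history.internal", "io.debezium.storage.file.history.FileSchemaHistory")
        .with("schema.history.internal.file.filename", "/Users/dk/Documents/work/ACET/dbhistory.dat")
        .build();
If the connector is actually a 1.x version, the original database.history.* names should still be the right ones, so confirming the Debezium version is the first thing to check.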
I am getting a lot of Socket Read Timeout errors when I try to read the response to my
POST request. I am using OkHttpClient version 3.11.0 with the following configuration:
@Bean
OkHttpClient okHttpClient() {
    def loggingInterceptor = new HttpLoggingInterceptor()
    loggingInterceptor.setLevel(HttpLoggingInterceptor.Level.BODY)
    Dispatcher dispatcher = new Dispatcher()
    dispatcher.setMaxRequests(200)
    dispatcher.setMaxRequestsPerHost(200)
    return new OkHttpClient()
            .newBuilder()
            .eventListenerFactory(PrintingEventListener.FACTORY)
            .retryOnConnectionFailure(true)
            .connectTimeout(30000, TimeUnit.MILLISECONDS)
            .readTimeout(30000, TimeUnit.MILLISECONDS)
            .dispatcher(dispatcher)
            .connectionPool(new ConnectionPool(200, 30, TimeUnit.SECONDS))
            .addNetworkInterceptor(loggingInterceptor)
            .addInterceptor(loggingInterceptor).build()
}
The code to process the request and response is given below:
Response r = client.newCall(request).execute()
r.withCloseable { response ->
    def returnable
    if (response.header('Content-Type')?.contains('application/json')) {
        def body = response.body().string()
        if (body.trim().isEmpty()) {
            returnable = [:]
        } else {
            try {
                def result = new JsonSlurper().parseText(body)
                return result
            } catch (any) {
                log.error('Failed to parse json response: ' + body, any.message)
                throw any
            }
        }
    } else if (response.header('Content-Type')?.contains('image/jpeg')) {
        returnable = response.body().bytes()
    } else {
        returnable = response.body().string()
    }
    return returnable
}
I added an event listener, and I see that the HTTP client reaches the responseBodyStart event and hangs there; when the timeout is reached, the call fails and throws a SocketTimeoutException. Is there anything missing in my configuration?
The event listener shows responseHeadersStart or responseBodyStart followed by connectionReleased once the specified timeout (30s) is reached.
Please find the event trace and exception trace below:
INFO 11097 : 2.1701E-5 -- callStart
INFO 11097 : 1.44572E-4 -- connectionAcquired
INFO 11097 : 0.001036047 -- requestHeadersStart
INFO 11097 : 0.001064492 -- requestHeadersEnd
INFO 11097 : 0.001084433 -- requestBodyStart
INFO 11097 : 0.001103787 -- requestBodyEnd
INFO 11097 : 0.001279736 -- responseHeadersStart
INFO 11097 : 1.007175496 -- responseHeadersEnd
INFO 11097 : 1.007247928 -- responseBodyStart
INFO 11097 : 31.082725087 -- connectionReleased
INFO 11097 : 31.083717147 -- callFailed
INFO 11097 : 31.092341876 --
java.net.SocketTimeoutException: Read timed out
at java.net.SocketInputStream.socketRead0(Native Method)
at java.net.SocketInputStream.socketRead(SocketInputStream.java:116)
at java.net.SocketInputStream.read(SocketInputStream.java:171)
at java.net.SocketInputStream.read(SocketInputStream.java:141)
at sun.security.ssl.InputRecord.readFully(InputRecord.java:465)
at sun.security.ssl.InputRecord.read(InputRecord.java:503)
at sun.security.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:983)
at sun.security.ssl.SSLSocketImpl.readDataRecord(SSLSocketImpl.java:940)
at sun.security.ssl.AppInputStream.read(AppInputStream.java:105)
at okio.Okio$2.read(Okio.java:140)
at okio.AsyncTimeout$2.read(AsyncTimeout.java:237)
at okio.RealBufferedSource.request(RealBufferedSource.java:68)
at okio.RealBufferedSource.require(RealBufferedSource.java:61)
at okio.RealBufferedSource.readHexadecimalUnsignedLong(RealBufferedSource.java:304)
at okhttp3.internal.http1.Http1Codec$ChunkedSource.readChunkSize(Http1Codec.java:469)
at okhttp3.internal.http1.Http1Codec$ChunkedSource.read(Http1Codec.java:449)
at okio.RealBufferedSource.request(RealBufferedSource.java:68)
at okhttp3.logging.HttpLoggingInterceptor.intercept(HttpLoggingInterceptor.java:241)
at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:147)
at okhttp3.internal.connection.ConnectInterceptor.intercept(ConnectInterceptor.java:45)
at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:147)
at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:121)
at okhttp3.internal.cache.CacheInterceptor.intercept(CacheInterceptor.java:93)
at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:147)
at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:121)
at okhttp3.internal.http.BridgeInterceptor.intercept(BridgeInterceptor.java:93)
at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:147)
at okhttp3.internal.http.RetryAndFollowUpInterceptor.intercept(RetryAndFollowUpInterceptor.java:126)
at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:147)
at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:121)
at okhttp3.logging.HttpLoggingInterceptor.intercept(HttpLoggingInterceptor.java:213)
at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:147)
at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:121)
at okhttp3.RealCall.getResponseWithInterceptorChain(RealCall.java:200)
at okhttp3.RealCall.execute(RealCall.java:77)
at okhttp3.Call$execute.call(Unknown Source)
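For what it's worth, readTimeout in OkHttp bounds each individual read on the socket while the body is streaming, not the whole call, so a server that stops sending mid-body for more than 30 seconds produces exactly this trace. A hedged Java sketch of deriving a client with a longer read timeout for one known-slow endpoint (the URL is hypothetical, and client refers to the OkHttpClient built above):
// Hedged sketch (OkHttp 3.x): derive a per-endpoint client with a longer read timeout.
// Clients built via newBuilder() share the original client's connection pool and dispatcher.
OkHttpClient slowReadClient = client.newBuilder()
        .readTimeout(120, TimeUnit.SECONDS) // bounds each socket read, not the whole call
        .build();
Request request = new Request.Builder()
        .url("https://example.com/slow-endpoint") // hypothetical URL
        .build();
try (Response response = slowReadClient.newCall(request).execute()) {
    String body = response.body().string();
}
This does not explain why the server stalls mid-body, but it helps separate "server is slow to finish the body" from a genuine network problem.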
I have created a Kafka topic with 5 partitions, and I am using the createStream receiver API as follows. But somehow only one receiver is getting the input data; the rest of the receivers are not processing anything. Can you please help?
JavaPairDStream<String, String> messages = null;
if (sparkStreamCount > 0) {
    // We create an input DStream for each partition of the topic, unify those streams, and then repartition the unified stream.
    List<JavaPairDStream<String, String>> kafkaStreams = new ArrayList<JavaPairDStream<String, String>>(sparkStreamCount);
    for (int i = 0; i < sparkStreamCount; i++) {
        kafkaStreams.add(KafkaUtils.createStream(jssc, contextVal.getString(KAFKA_ZOOKEEPER), contextVal.getString(KAFKA_GROUP_ID), kafkaTopicMap));
    }
    messages = jssc.union(kafkaStreams.get(0), kafkaStreams.subList(1, kafkaStreams.size()));
} else {
    messages = KafkaUtils.createStream(jssc, contextVal.getString(KAFKA_ZOOKEEPER), contextVal.getString(KAFKA_GROUP_ID), kafkaTopicMap);
}
After adding these changes, I am getting the following exceptions:
INFO : org.apache.spark.streaming.kafka.KafkaReceiver - Connected to localhost:2181
INFO : org.apache.spark.streaming.receiver.ReceiverSupervisorImpl - Stopping receiver with message: Error starting receiver 0: java.lang.AssertionError: assertion failed
INFO : org.apache.spark.streaming.receiver.ReceiverSupervisorImpl - Called receiver onStop
INFO : org.apache.spark.streaming.receiver.ReceiverSupervisorImpl - Deregistering receiver 0
ERROR: org.apache.spark.streaming.scheduler.ReceiverTracker - Deregistered receiver for stream 0: Error starting receiver 0 - java.lang.AssertionError: assertion failed
at scala.Predef$.assert(Predef.scala:165)
at kafka.consumer.TopicCount$$anonfun$makeConsumerThreadIdsPerTopic$2.apply(TopicCount.scala:36)
at kafka.consumer.TopicCount$$anonfun$makeConsumerThreadIdsPerTopic$2.apply(TopicCount.scala:34)
at scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:772)
at scala.collection.immutable.Map$Map1.foreach(Map.scala:109)
at scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:771)
at kafka.consumer.TopicCount$class.makeConsumerThreadIdsPerTopic(TopicCount.scala:34)
at kafka.consumer.StaticTopicCount.makeConsumerThreadIdsPerTopic(TopicCount.scala:100)
at kafka.consumer.StaticTopicCount.getConsumerThreadIdsPerTopic(TopicCount.scala:104)
at kafka.consumer.ZookeeperConsumerConnector.consume(ZookeeperConsumerConnector.scala:198)
at kafka.consumer.ZookeeperConsumerConnector.createMessageStreams(ZookeeperConsumerConnector.scala:138)
at org.apache.spark.streaming.kafka.KafkaReceiver.onStart(KafkaInputDStream.scala:111)
at org.apache.spark.streaming.receiver.ReceiverSupervisor.startReceiver(ReceiverSupervisor.scala:148)
at org.apache.spark.streaming.receiver.ReceiverSupervisor.start(ReceiverSupervisor.scala:130)
at org.apache.spark.streaming.scheduler.ReceiverTracker$ReceiverTrackerEndpoint$$anonfun$9.apply(ReceiverTracker.scala:542)
at org.apache.spark.streaming.scheduler.ReceiverTracker$ReceiverTrackerEndpoint$$anonfun$9.apply(ReceiverTracker.scala:532)
at org.apache.spark.SparkContext$$anonfun$37.apply(SparkContext.scala:1986)
at org.apache.spark.SparkContext$$anonfun$37.apply(SparkContext.scala:1986)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
at org.apache.spark.scheduler.Task.run(Task.scala:88)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
INFO : org.apache.spark.streaming.receiver.ReceiverSupervisorImpl - Stopped receiver 0
INFO : org.apache.spark.streaming.receiver.BlockGenerator - Stopping BlockGenerator
INFO : org.apache.spark.streaming.util.RecurringTimer - Stopped timer for BlockGenerator after time 1473964037200
INFO : org.apache.spark.streaming.receiver.BlockGenerator - Waiting for block pushing thread to terminate
INFO : org.apache.spark.streaming.receiver.BlockGenerator - Pushing out the last 0 blocks
INFO : org.apache.spark.streaming.receiver.BlockGenerator - Stopped block pushing thread
INFO : org.apache.spark.streaming.receiver.BlockGenerator - Stopped BlockGenerator
INFO : org.apache.spark.streaming.receiver.ReceiverSupervisorImpl - Waiting for receiver to be stopped
ERROR: org.apache.spark.streaming.receiver.ReceiverSupervisorImpl - Stopped receiver with error: java.lang.AssertionError: assertion failed
ERROR: org.apache.spark.executor.Executor - Exception in task 0.0 in stage 29.0
There is one issue with the above code. The kafkaTopicMap parameter of the KafkaUtils.createStream method specifies a map of (topic_name -> numPartitions) to consume, and each partition is consumed in its own thread.
Try the code below:
JavaPairDStream<String, String> messages = null;
int sparkStreamCount = 5;
Map<String, Integer> kafkaTopicMap = new HashMap<String, Integer>();
if (sparkStreamCount > 0) {
    List<JavaPairDStream<String, String>> kafkaStreams = new ArrayList<JavaPairDStream<String, String>>(sparkStreamCount);
    for (int i = 0; i < sparkStreamCount; i++) {
        kafkaTopicMap.put(topic, i + 1);
        kafkaStreams.add(KafkaUtils.createStream(streamingContext, contextVal.getString(KAFKA_ZOOKEEPER), contextVal.getString(KAFKA_GROUP_ID), kafkaTopicMap));
    }
    messages = streamingContext.union(kafkaStreams.get(0), kafkaStreams.subList(1, kafkaStreams.size()));
} else {
    messages = KafkaUtils.createStream(streamingContext, contextVal.getString(KAFKA_ZOOKEEPER), contextVal.getString(KAFKA_GROUP_ID), kafkaTopicMap);
}
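Alternatively, if the intent is simply one consumer thread per partition, a single receiver can be given five threads through the topic map. A hedged sketch (the topic name is an assumption):
// Hedged sketch: one receiver with 5 consumer threads for a 5-partition topic.
// "myTopic" is a placeholder for the real topic name.
Map<String, Integer> topicThreadMap = new HashMap<String, Integer>();
topicThreadMap.put("myTopic", 5);
JavaPairDStream<String, String> messages = KafkaUtils.createStream(
        streamingContext,
        contextVal.getString(KAFKA_ZOOKEEPER),
        contextVal.getString(KAFKA_GROUP_ID),
        topicThreadMap);
Note that all threads of a single receiver run on one executor, so this trades receiver-level parallelism for thread-level parallelism; repartitioning the resulting stream before heavy processing is still worthwhile.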
I was trying to test my HBase code using HBaseTestingUtility. Every time I started my mini-cluster using the code snippet below, I got an exception.
public void startCluster()
{
    File workingDirectory = new File("./");
    Configuration conf = new Configuration();
    System.setProperty("test.build.data", workingDirectory.getAbsolutePath());
    conf.set("test.build.data", new File(workingDirectory, "zookeeper").getAbsolutePath());
    conf.set("fs.default.name", "file:///");
    conf.set("zookeeper.session.timeout", "180000");
    conf.set("hbase.zookeeper.peerport", "2888");
    conf.set("hbase.zookeeper.property.clientPort", "2181");
    conf.addResource(new Path("conf/hbase-site1.xml"));
    try
    {
        masterDir = new File(workingDirectory, "hbase");
        conf.set(HConstants.HBASE_DIR, masterDir.toURI().toURL().toString());
    }
    catch (MalformedURLException e1)
    {
        logger.error(e1.getMessage());
    }
    Configuration hbaseConf = HBaseConfiguration.create(conf);
    utility = new HBaseTestingUtility(hbaseConf);
    // Change permission for dfs.data.dir, please refer
    // https://issues.apache.org/jira/browse/HBASE-5711 for more details.
    try
    {
        Process process = Runtime.getRuntime().exec("/bin/sh -c umask");
        BufferedReader br = new BufferedReader(new InputStreamReader(process.getInputStream()));
        int rc = process.waitFor();
        if (rc == 0)
        {
            String umask = br.readLine();
            int umaskBits = Integer.parseInt(umask, 8);
            int permBits = 0777 & ~umaskBits;
            String perms = Integer.toString(permBits, 8);
            logger.info("Setting dfs.datanode.data.dir.perm to " + perms);
            utility.getConfiguration().set("dfs.datanode.data.dir.perm", perms);
        }
        else
        {
            logger.warn("Failed running umask command in a shell, nonzero return value");
        }
    }
    catch (Exception e)
    {
        // ignore errors, we might not be running on POSIX, or "sh" might
        // not be on the path
        logger.warn("Couldn't get umask", e);
    }
    if (!checkIfServerRunning())
    {
        hTablePool = new HTablePool(conf, 1);
        try
        {
            zkCluster = new MiniZooKeeperCluster(conf);
            zkCluster.setDefaultClientPort(2181);
            zkCluster.setTickTime(18000);
            zkDir = new File(utility.getClusterTestDir().toString());
            zkCluster.startup(zkDir);
            utility.setZkCluster(zkCluster);
            utility.startMiniCluster();
            utility.getHBaseCluster().startMaster();
        }
        catch (Exception e)
        {
            e.printStackTrace();
            logger.error(e.getMessage());
            throw new RuntimeException(e);
        }
    }
}
The exception I get is as follows:
2013-09-10 15:26:26 INFO ClientCnxn:849 - Socket connection established to localhost/127.0.0.1:2181, initiating session
2013-09-10 15:26:26 INFO ZooKeeperServer:839 - Client attempting to establish new session at /127.0.0.1:45934
2013-09-10 15:26:26 INFO ZooKeeperServer:595 - Established session 0x141074cd6150002 with negotiated timeout 180000 for client /127.0.0.1:45934
2013-09-10 15:26:26 INFO ClientCnxn:1207 - Session establishment complete on server localhost/127.0.0.1:2181, sessionid = 0x141074cd6150002, negotiated timeout = 180000
2013-09-10 15:26:26 INFO HBaseRPC:289 - Server at localhost/127.0.0.1:42926 could not be reached after 1 tries, giving up.
2013-09-10 15:26:26 WARN AssignmentManager:1714 - Failed assignment of -ROOT-,,0.70236052 to localhost,42926,1378806982623, trying to assign elsewhere instead; retry=0
org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed setting up proxy interface org.apache.hadoop.hbase.ipc.HRegionInterface to localhost/127.0.0.1:42926 after attempts=1
at org.apache.hadoop.hbase.ipc.HBaseRPC.handleConnectionException(HBaseRPC.java:291)
at org.apache.hadoop.hbase.ipc.HBaseRPC.waitForProxy(HBaseRPC.java:259)
at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.getHRegionConnection(HConnectionManager.java:1305)
at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.getHRegionConnection(HConnectionManager.java:1261)
at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.getHRegionConnection(HConnectionManager.java:1248)
at org.apache.hadoop.hbase.master.ServerManager.getServerConnection(ServerManager.java:550)
at org.apache.hadoop.hbase.master.ServerManager.sendRegionOpen(ServerManager.java:483)
at org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:1664)
at org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:1387)
at org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:1362)
at org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:1357)
at org.apache.hadoop.hbase.master.AssignmentManager.assignRoot(AssignmentManager.java:2236)
at org.apache.hadoop.hbase.master.HMaster.assignRootAndMeta(HMaster.java:654)
at org.apache.hadoop.hbase.master.HMaster.finishInitialization(HMaster.java:551)
at org.apache.hadoop.hbase.master.HMaster.run(HMaster.java:362)
at java.lang.Thread.run(Thread.java:722)
Caused by: java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:692)
at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:207)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:525)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:489)
at org.apache.hadoop.hbase.ipc.HBaseClient$Connection.setupConnection(HBaseClient.java:416)
at org.apache.hadoop.hbase.ipc.HBaseClient$Connection.setupIOstreams(HBaseClient.java:462)
at org.apache.hadoop.hbase.ipc.HBaseClient.getConnection(HBaseClient.java:1150)
at org.apache.hadoop.hbase.ipc.HBaseClient.call(HBaseClient.java:1000)
at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Invoker.invoke(WritableRpcEngine.java:150)
at com.sun.proxy.$Proxy20.getProtocolVersion(Unknown Source)
at org.apache.hadoop.hbase.ipc.WritableRpcEngine.getProxy(WritableRpcEngine.java:183)
at org.apache.hadoop.hbase.ipc.HBaseRPC.getProxy(HBaseRPC.java:335)
at org.apache.hadoop.hbase.ipc.HBaseRPC.getProxy(HBaseRPC.java:312)
at org.apache.hadoop.hbase.ipc.HBaseRPC.getProxy(HBaseRPC.java:364)
at org.apache.hadoop.hbase.ipc.HBaseRPC.waitForProxy(HBaseRPC.java:236)
... 14 more
2013-09-10 15:26:26 WARN AssignmentManager:1736 - Unable to find a viable location to assign region -ROOT-,,0.70236052
2013-09-10 15:27:24 INFO audit:5677 - allowed=true ugi=aniket (auth:SIMPLE) ip=/127.0.0.1 cmd=listStatus src=/user/aniket/hbase/.oldlogs dst=null perm=null
2013-09-10 15:27:24 INFO audit:5677 - allowed=true ugi=aniket (auth:SIMPLE) ip=/127.0.0.1 cmd=listStatus src=/user/aniket/hbase/.archive dst=null perm=null
2013-09-10 15:28:24 INFO audit:5677 - allowed=true ugi=aniket (auth:SIMPLE) ip=/127.0.0.1 cmd=listStatus src=/user/aniket/hbase/.archive dst=null perm=null
2013-09-10 15:28:24 INFO audit:5677 - allowed=true ugi=aniket (auth:SIMPLE) ip=/127.0.0.1 cmd=listStatus src=/user/aniket/hbase/.oldlogs dst=null perm=null
2013-09-10 15:29:24 INFO audit:5677 - allowed=true ugi=aniket (auth:SIMPLE) ip=/127.0.0.1 cmd=listStatus src=/user/aniket/hbase/.oldlogs dst=null perm=null
2013-09-10 15:29:24 INFO audit:5677 - allowed=true ugi=aniket (auth:SIMPLE) ip=/127.0.0.1 cmd=listStatus src=/user/aniket/hbase/.archive dst=null perm=null
2013-09-10 15:29:42 ERROR MiniHBaseCluster:201 - Error starting cluster
java.lang.RuntimeException: Master not initialized after 200 seconds
at org.apache.hadoop.hbase.util.JVMClusterUtil.startup(JVMClusterUtil.java:206)
at org.apache.hadoop.hbase.LocalHBaseCluster.startup(LocalHBaseCluster.java:420)
at org.apache.hadoop.hbase.MiniHBaseCluster.init(MiniHBaseCluster.java:196)
at org.apache.hadoop.hbase.MiniHBaseCluster.<init>(MiniHBaseCluster.java:76)
at org.apache.hadoop.hbase.HBaseTestingUtility.startMiniHBaseCluster(HBaseTestingUtility.java:635)
at org.apache.hadoop.hbase.HBaseTestingUtility.startMiniCluster(HBaseTestingUtility.java:609)
at org.apache.hadoop.hbase.HBaseTestingUtility.startMiniCluster(HBaseTestingUtility.java:557)
at org.apache.hadoop.hbase.HBaseTestingUtility.startMiniCluster(HBaseTestingUtility.java:526)
at HBaseTesting.startCluster(HBaseTesting.java:131)
at HBaseTesting.main(HBaseTesting.java:62)
2013-09-10 15:29:42 INFO HMaster:1635 - Cluster shutdown requested
2013-09-10 15:29:42 INFO HRegionServer:1666 - STOPPED: Shutdown requested
2013-09-10 15:29:42 INFO HBaseServer:1651 - Stopping server on 42926
Could anyone help me with a solution?
Here's my solution.
I had to update my /etc/hosts file.
The two entries of interest are:
127.0.0.1 localhost
127.0.1.1 myhostname
I had to change the IP that myhostname points to so that it also points to 127.0.0.1.
Once my /etc/hosts was updated to look like this:
127.0.0.1 localhost
127.0.0.1 myhostname
The code started working.
(This assumes a Linux server; replace /etc/hosts with the equivalent file for your operating system.)
http://en.wikipedia.org/wiki/Hosts_(file)
Adding the Guava dependency to my Gradle file worked for me.
compile group: 'com.google.guava', name: 'guava', version: '14.0'
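For comparison with the startCluster() code above, here is a minimal hedged sketch that lets HBaseTestingUtility manage ZooKeeper and the data directories itself (a hypothetical test harness, not the poster's setup):
// Hedged sketch: let HBaseTestingUtility start its own mini ZooKeeper, mini DFS
// and mini HBase cluster instead of wiring MiniZooKeeperCluster manually.
HBaseTestingUtility utility = new HBaseTestingUtility();
utility.startMiniCluster();
try {
    // ... run tests against utility.getConfiguration() ...
} finally {
    utility.shutdownMiniCluster();
}
This removes one place where host and port mismatches can creep in.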
I am trying to run the distributed shell example on a YARN cluster.
@Test
public void realClusterTest() throws Exception {
    System.setProperty("HADOOP_USER_NAME", "hdfs");
    String[] args = {
            "--jar",
            APPMASTER_JAR,
            "--num_containers",
            "1",
            "--shell_command",
            "ls",
            "--master_memory",
            "512",
            "--container_memory",
            "128"
    };
    LOG.info("Initializing DS Client");
    Client client = new Client(new Configuration());
    boolean initSuccess = client.init(args);
    Assert.assertTrue(initSuccess);
    LOG.info("Running DS Client");
    boolean result = client.run();
    LOG.info("Client run completed. Result=" + result);
    Assert.assertTrue(result);
}
But it fails with:
2013-09-17 11:45:28,338 INFO [main] distributedshell.Client (Client.java:monitorApplication(600)) - Got application report from ASM for, appId=11, clientToAMToken=null, appDiagnostics=Application application_1379338026167_0011 failed 2 times due to AM Container for appattempt_1379338026167_0011_000002 exited with exitCode: 1 due to: Exception from container-launch:
org.apache.hadoop.util.Shell$ExitCodeException:
at org.apache.hadoop.util.Shell.runCommand(Shell.java:458)
at org.apache.hadoop.util.Shell.run(Shell.java:373)
at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:578)
at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:195)
................
.Failing this attempt.. Failing the application., appMasterHost=N/A, appQueue=default, appMasterRpcPort=0, appStartTime=1379407525237, yarnAppState=FAILED, distributedFinalState=FAILED, appTrackingUrl=ip-10-232-149-222.us-west-2.compute.internal:8088/proxy/application_1379338026167_0011/, appUser=hdfs
Here is what I see in the server logs:
2013-09-17 08:45:26,870 WARN nodemanager.DefaultContainerExecutor (DefaultContainerExecutor.java:launchContainer(213)) - Exception from container-launch with container ID: container_1379338026167_0011_02_000001 and exit code: 1
org.apache.hadoop.util.Shell$ExitCodeException:
at org.apache.hadoop.util.Shell.runCommand(Shell.java:458)
at org.apache.hadoop.util.Shell.run(Shell.java:373)
at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:578)
at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:195)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:258)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:74)
The question is: how can I get more details to identify what is going wrong?
PS: we are using HDP 2.0.5
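One way to get more detail, assuming log aggregation is enabled (yarn.log-aggregation-enable=true), is to pull the aggregated container logs for the failed attempt; the ApplicationMaster container's stdout/stderr usually show why it exited with code 1:
yarn logs -applicationId application_1379338026167_0011
If aggregation is not enabled, the same stdout/stderr files should be under the NodeManager's local yarn.nodemanager.log-dirs on the node that launched container_1379338026167_0011_02_000001.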