Debezium - Oracle Connector - Service Not Starting

The DebeziumEngine is looking for a Kafka topic even though I have not specified KafkaOffsetBackingStore for offset.storage.
Reference: DebeziumEngine Config
Config
Configuration config = Configuration.create()
        .with("name", "oracle_debezium_connector")
        .with("connector.class", "io.debezium.connector.oracle.OracleConnector")
        .with("offset.storage", "org.apache.kafka.connect.storage.FileOffsetBackingStore")
        .with("offset.storage.file.filename", "/Users/dk/Documents/work/ACET/offset.dat")
        .with("offset.flush.interval.ms", 2000)
        .with("database.hostname", "localhost")
        .with("database.port", "1521")
        .with("database.user", "pravin")
        .with("database.password", "*****")
        .with("database.sid", "ORCLCDB")
        .with("database.server.name", "mServer")
        .with("database.out.server.name", "dbzxout")
        .with("database.history", "io.debezium.relational.history.FileDatabaseHistory")
        .with("database.history.file.filename", "/Users/dk/Documents/work/ACET/dbhistory.dat")
        .with("topic.prefix", "cycowner")
        .with("database.dbname", "ORCLCDB")
        .build();
DebeziumEngine
DebeziumEngine<ChangeEvent<String, String>> engine = DebeziumEngine.create(Json.class)
        .using(config.asProperties())
        .using(connectorCallback)
        .using(completionCallback)
        .notifying(record -> {
            System.out.println(record);
        })
        .build();
Error:
2022-10-29T16:06:16,457 ERROR [pool-2-thread-1] i.d.c.Configuration: The 'schema.history.internal.kafka.topic' value is invalid: A value is required
2022-10-29T16:06:16,457 ERROR [pool-2-thread-1] i.d.c.Configuration: The 'schema.history.internal.kafka.bootstrap.servers' value is invalid: A value is required
2022-10-29T16:06:16,458 INFO [pool-2-thread-1] i.d.c.c.BaseSourceTask: Stopping down connector
2022-10-29T16:06:16,463 INFO [pool-3-thread-1] i.d.j.JdbcConnection: Connection gracefully closed
2022-10-29T16:06:16,465 INFO [pool-2-thread-1] o.a.k.c.s.FileOffsetBackingStore: Stopped FileOffsetBackingStore
connector stopped successfully
---------------------------------------------------
success status: false, message : Unable to initialize and start connector's task class 'io.debezium.connector.oracle.OracleConnectorTask' with config: {connector.class=io.debezium.connector.oracle.OracleConnector, database.history.file.filename=/Users/dkuma416/Documents/work/ACET/dbhistory.dat, database.user=pravin, database.dbname=ORCLCDB, offset.storage=org.apache.kafka.connect.storage.FileOffsetBackingStore, database.server.name=mServer, offset.flush.timeout.ms=5000, errors.retry.delay.max.ms=10000, database.port=1521, database.sid=ORCLCDB, offset.flush.interval.ms=2000, topic.prefix=cycowner, offset.storage.file.filename=/Users/dkuma416/Documents/work/ACET/offset.dat, errors.max.retries=-1, database.hostname=localhost, database.password=********, name=oracle_debezium_connector, database.out.server.name=dbzxout, errors.retry.delay.initial.ms=300, value.converter=org.apache.kafka.connect.json.JsonConverter, key.converter=org.apache.kafka.connect.json.JsonConverter, database.history=io.debezium.relational.history.MemoryDatabaseHistory}, Error: Error configuring an instance of KafkaSchemaHistory; check the logs for details
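
Note: the validation errors name schema.history.internal.kafka.topic and schema.history.internal.kafka.bootstrap.servers, which suggests a Debezium 2.x connector. In 2.x the database.history.* properties were renamed to schema.history.internal.*, so the legacy FileDatabaseHistory setting above is ignored and the connector falls back to the default Kafka-backed schema history, hence the demand for a Kafka topic. A minimal sketch of the file-based equivalent, assuming Debezium 2.x with the debezium-storage-file module on the classpath:

// Sketch, assuming Debezium 2.x: replace the legacy database.history.* keys
// with their schema.history.internal.* equivalents so the connector uses a
// file-backed schema history instead of the default KafkaSchemaHistory.
Configuration config = Configuration.create()
        // ... connector, offset and database properties as above ...
        .with("schema.history.internal", "io.debezium.storage.file.history.FileSchemaHistory")
        .with("schema.history.internal.file.filename", "/Users/dk/Documents/work/ACET/dbhistory.dat")
        .build();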

Related

QuickFix/J doesn't attempt to connect to the specified socket

I am using QuickFix/J 2.3.1 (same results with 2.3.0). I have a rather straightforward Spring Boot application, where a FIX service is one of the beans. It creates an initiator. Until recently everything worked fine. Suddenly I stumbled into the following issue: QuickFix/J doesn't seem to even attempt to open a connection to the specified host:port. I suspect this could be something in my code, but so far I don't have a clue how to figure out what is going on.
Here is the initialisation code (Kotlin):
@PostConstruct
override fun start() {
    logger.info("Using config file {}", config.tradingServiceConfig.quickFixConfigFile)
    val sessionSettings = SessionSettings(config.tradingServiceConfig.quickFixConfigFile)
    val messageStoreFactory = FileStoreFactory(sessionSettings)
    val messageFactory = DefaultMessageFactory()
    initiator = SocketInitiator(
        this,
        messageStoreFactory,
        sessionSettings,
        SLF4JLogFactory(sessionSettings),
        messageFactory
    )
    logger.info("Calling initiator start")
    initiator?.start()
    logger.info("Initiator startup finished")
}
Here is the corresponding piece of log:
2021-12-12 22:20:48.962 INFO 94182 --- [ restartedMain] i.s.trading.gateway.service.FixService : Calling initiator start
2021-12-12 22:20:49.157 INFO 94182 --- [ restartedMain] quickfix.DefaultSessionSchedule : [FIX.4.2:XXX_STAGE_UAT->YYY_XXX_STAGE_UAT] daily, 08:00:00-UTC - 08:45:00-UTC
2021-12-12 22:20:49.180 INFO 94182 --- [ restartedMain] quickfixj.event : FIX.4.2:XXX_STAGE_UAT->YYY_XXX_STAGE_UAT: Session FIX.4.2:XXX_STAGE_UAT->YYY_XXX_STAGE_UAT schedule is daily, 08:00:00-UTC - 08:45:00-UTC
2021-12-12 22:20:49.181 INFO 94182 --- [ restartedMain] quickfixj.event : FIX.4.2:XXX_STAGE_UAT->YYY_XXX_STAGE_UAT: Session state is not current; resetting FIX.4.2:XXX_STAGE_UAT->YYY_XXX_STAGE_UAT
2021-12-12 22:20:49.185 INFO 94182 --- [ restartedMain] quickfixj.event : FIX.4.2:XXX_STAGE_UAT->YYY_XXX_STAGE_UAT: Created session: FIX.4.2:XXX_STAGE_UAT->YYY_XXX_STAGE_UAT
2021-12-12 22:20:49.186 INFO 94182 --- [ restartedMain] i.s.t.gateway.service.FixServiceBase : New session started: FIX.4.2:XXX_STAGE_UAT->YYY_XXX_STAGE_UAT}
2021-12-12 22:20:49.193 INFO 94182 --- [ restartedMain] quickfix.mina.NetworkingOptions : Socket option: SocketTcpNoDelay=true
2021-12-12 22:20:49.194 INFO 94182 --- [ restartedMain] quickfix.mina.NetworkingOptions : Socket option: SocketSynchronousWrites=false
2021-12-12 22:20:49.194 INFO 94182 --- [ restartedMain] quickfix.mina.NetworkingOptions : Socket option: SocketSynchronousWriteTimeout=30000
2021-12-12 22:20:49.276 INFO 94182 --- [ restartedMain] quickfixj.event : FIX.4.2:XXX_STAGE_UAT->YYY_XXX_STAGE_UAT: Configured socket addresses for session: [localhost/127.0.0.1:10669]
2021-12-12 22:20:49.277 INFO 94182 --- [ restartedMain] quickfix.SocketInitiator : SessionTimer started
2021-12-12 22:20:49.280 INFO 94182 --- [ restartedMain] i.s.trading.gateway.service.FixService : Initiator startup finished
2021-12-12 22:20:49.280 INFO 94182 --- [ssage Processor] quickfix.SocketInitiator : Started QFJ Message Processor
No other FIX messages, quickfix logging included, appear in the log. And I can see via netstat that no attempt is even made to connect to the specified socket. I tried stopping the process in the debugger to see what was going on, but couldn't see anything obvious.
As I said before, this used to work just fine a week or so ago when I last tried, that's why I'm so puzzled.
Any thoughts on how to debug the issue?
You seem to have configured the initiator to connect to the acceptor on a daily schedule, between 08:00:00 UTC and 08:45:00 UTC.
Try widening the time range (e.g. 08:00:00 to 18:00:00) and see if you get connected.
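The schedule comes from StartTime/EndTime in the QuickFix/J session settings file. A sketch of the relevant entries, with the CompIDs taken from the log above; everything else required (ConnectionType, SocketConnectHost/Port, etc.) is omitted here:

[session]
# Widen the connection window; times are interpreted in the configured TimeZone
BeginString=FIX.4.2
SenderCompID=XXX_STAGE_UAT
TargetCompID=YYY_XXX_STAGE_UAT
TimeZone=UTC
StartTime=08:00:00
EndTime=18:00:00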
PS: If you're using QuickFix/J and Spring, have a look at the QuickFixJ Spring Boot starter on GitHub: https://github.com/esanchezros/quickfixj-spring-boot-starter

Apache Camel Spring Boot - Graceful shutdown of the application after processing the routes

I have a couple of routes (route 1 and route 2) in my Spring Boot application. I have been researching how to gracefully shut down the application after processing both routes. I have referred to the documentation (https://camel.apache.org/manual/latest/graceful-shutdown.html) but couldn't successfully achieve what I needed. Maybe my understanding is wrong.
Below are my two routes
Route 1
from("timer://runOnce?repeatCount=1")
.to("{{sql.selectAll}}")
......... SOME PROCESSING
.to("direct:checkStatus")
Route 2
from("direct:checkStatus")
.delay(5000)
.loopDoWhile(CONDITION)
.process(DO_SOMETHING)
.end()
.to("jpa:com.pqr.MyClass)
.stop();
I have tried all these options
1. Automatic shutdown after 60 seconds
camel.springboot.duration-max-seconds = 60
It does GRACEFULLY shut down the two routes, but then WARNs about FORCEFULLY shutting down the ExecutorService, and it doesn't stop the main thread, so the application keeps running.
2020-03-01 18:28:25.507 WARN 30279 --- [otTerminateTask] o.a.c.i.e.BaseExecutorServiceManager : Forcing shutdown of ExecutorService: org.apache.camel.util.concurrent.SizedScheduledExecutorService@17fbfb02[CamelSpringBootTerminateTask] due first await termination elapsed.
2020-03-01 18:28:25.507 WARN 30279 --- [otTerminateTask] o.a.c.i.e.BaseExecutorServiceManager : Forcing shutdown of ExecutorService: org.apache.camel.util.concurrent.SizedScheduledExecutorService@17fbfb02[CamelSpringBootTerminateTask] due interrupted.
2020-03-01 18:28:25.508 INFO 30279 --- [otTerminateTask] o.a.c.i.e.BaseExecutorServiceManager : Shutdown of ExecutorService: org.apache.camel.util.concurrent.SizedScheduledExecutorService@17fbfb02[CamelSpringBootTerminateTask] is shutdown: true and terminated: false took: 10.004 seconds.
2020-03-01 18:28:25.508 WARN 30279 --- [otTerminateTask] o.a.c.i.e.BaseExecutorServiceManager : Forced shutdown of 1 ExecutorService's which has not been shutdown properly (acting as fail-safe)
2020-03-01 18:28:25.508 WARN 30279 --- [otTerminateTask] o.a.c.i.e.BaseExecutorServiceManager : forced -> org.apache.camel.util.concurrent.SizedScheduledExecutorService@17fbfb02[CamelSpringBootTerminateTask]
2. Initiate shutdown from Route 2
from("direct:checkStatus")
    .delay(5000)
    .loopDoWhile(CONDITION)
        .process(DO_SOMETHING)
    .end()
    .to("jpa:com.pqr.MyClass")
    .process(exchange -> {
        exchange.getContext().getRouteController().stopRoute("route1");
        exchange.getContext().getRouteController().stopRoute("route2");
        System.out.println("Route1 -->" + exchange.getContext().getRouteController().getRouteStatus("route1"));
        System.out.println("Route2 -->" + exchange.getContext().getRouteController().getRouteStatus("route2"));
        exchange.getContext().shutdown();
    });
"route1" is gracefully stopped but "route2" fails to be gracefully stopped with below message and waits for default timeout (300s).
2020-03-01 18:35:29.113 INFO 30504 --- [read #4 - Delay] o.a.c.i.engine.DefaultShutdownStrategy : Starting to graceful shutdown 1 routes (timeout 300 seconds)
2020-03-01 18:35:29.116 INFO 30504 --- [ - ShutdownTask] o.a.c.i.engine.DefaultShutdownStrategy : Route: route1 shutdown complete, was consuming from: timer://runOnce?repeatCount=1
2020-03-01 18:35:29.116 INFO 30504 --- [read #4 - Delay] o.a.c.i.engine.DefaultShutdownStrategy : Graceful shutdown of 1 routes completed in 0 seconds
2020-03-01 18:35:29.117 INFO 30504 --- [read #4 - Delay] o.a.c.s.boot.SpringBootCamelContext : Route: route1 is stopped, was consuming from: timer://runOnce?repeatCount=1
2020-03-01 18:35:29.117 INFO 30504 --- [read #4 - Delay] o.a.c.i.engine.DefaultShutdownStrategy : Starting to graceful shutdown 1 routes (timeout 300 seconds)
2020-03-01 18:35:29.118 INFO 30504 --- [ - ShutdownTask] o.a.c.i.engine.DefaultShutdownStrategy : Waiting as there are still 1 inflight and pending exchanges to complete, timeout in 300 seconds. Inflights per route: [route2 = 1]
It looks like there is a pending exchange message to be consumed. Do I need to manually clear/consume the exchange message in order to facilitate a graceful shutdown?
Neither option stops the main application. Do I have to write a custom shutdown strategy instead of DefaultShutdownStrategy to achieve this? Can someone kindly point me to an example of shutting down the Spring Boot application after completion of the routes? Thanks in advance!
Did you try using exchange.getContext().stop() to stop the main application?
To force-stop a route without waiting for the default timeout, you can use exchange.getContext().stopRoute(routeId, 1L, TimeUnit.SECONDS);, or set your own timeout in seconds with context.getShutdownStrategy().setTimeout(30);.
You have to stop the currently running route from a new thread. The onCompletion() DSL is there to make sure every message has been processed.
The attached code is in Kotlin, but it should be easy to transfer it to Java:
fromF(route).id(routeId)
    .process(someProcessor)
    .to("jdbc:dataSource")
    .onCompletion()
        .choice().`when`(exchangeProperty("CamelBatchComplete"))
            .process(object : Processor {
                override fun process(exchange: Exchange) {
                    Thread {
                        try {
                            exchange.context.routeController.stopRoute(routeId)
                            exchange.context.stop()
                        } catch (e: Exception) {
                            throw RuntimeException(e)
                        }
                    }.start()
                }
            })
    // must use end to denote the end of the onCompletion route
    .end()
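For reference, a rough Java-DSL equivalent of the route above (a sketch; route, routeId, and someProcessor are the same placeholders as in the Kotlin version):

// Rough Java equivalent (sketch). Stopping happens on a separate thread so the
// exchange being completed is not blocked by the shutdown itself.
from(route).routeId(routeId)
    .process(someProcessor)
    .to("jdbc:dataSource")
    .onCompletion()
        .choice().when(exchangeProperty("CamelBatchComplete"))
            .process(exchange -> new Thread(() -> {
                try {
                    exchange.getContext().getRouteController().stopRoute(routeId);
                    exchange.getContext().stop();
                } catch (Exception e) {
                    throw new RuntimeException(e);
                }
            }).start())
    // must use end() to denote the end of the onCompletion route
    .end();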
If you want to stop the entire application, you can use the class below and add a call to shutdownManager.initiateShutdown() after exchange.context.stop().
@Component
class ShutdownManager {
    companion object {
        val logger = LoggerFactory.getLogger(ShutdownManager::class.java)
    }

    @Autowired
    private val appContext: ApplicationContext? = null

    fun initiateShutdown(returnCode: Int) {
        logger.info("Shutting down with a Shutdown manager")
        SpringApplication.exit(appContext, ExitCodeGenerator { returnCode })
        System.exit(returnCode)
    }
}
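In Java the same helper might look roughly like this (a sketch, assuming Spring Boot's SpringApplication.exit API):

// Java sketch of the same ShutdownManager (assumption: Spring Boot context).
@Component
public class ShutdownManager {
    private static final Logger logger = LoggerFactory.getLogger(ShutdownManager.class);

    @Autowired
    private ApplicationContext appContext;

    public void initiateShutdown(int returnCode) {
        logger.info("Shutting down with a Shutdown manager");
        // SpringApplication.exit closes the context and runs exit-code generators
        SpringApplication.exit(appContext, () -> returnCode);
        System.exit(returnCode);
    }
}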

Storm topology shuts down in LocalCluster after running for a few seconds

I have a very basic topology. Starting with a KafkaSpout, it has 3 bolts. The first bolt is a CassandraWriterBolt that writes data to Cassandra; the remaining two bolts read old data from Cassandra, create another set of data from the new and old data, and insert that data back into Cassandra.
I am running the topology in a LocalCluster during development. It runs for a few seconds, then starts shutting down workers, executors, etc. Finally it fails with a Cassandra-driver-related exception:
java.lang.IllegalStateException: Could not send request, session is closed
at com.datastax.driver.core.SessionManager.execute(SessionManager.java:696) ~[cassandra-driver-core-3.6.0.jar:na]
Other logs are:
[er Executor - 1] o.a.s.s.org.apache.zookeeper.ZooKeeper : Session: 0x100166ad36d0024 closed
[.0/0.0.0.0:2000] o.a.s.s.o.a.z.server.NIOServerCnxn : Unable to read additional data from client sessionid 0x100166ad36d0024, likely client has closed socket
[.0/0.0.0.0:2000] o.a.s.s.o.a.z.server.NIOServerCnxn : Closed socket connection for client /0:0:0:0:0:0:0:1:63890 which had sessionid 0x100166ad36d0024
[- 1-EventThread] o.a.s.s.org.apache.zookeeper.ClientCnxn : EventThread shut down for session: 0x100166ad36d0024
[tor-Framework-0] o.a.s.s.o.a.c.f.i.CuratorFrameworkImpl : backgroundOperationsLoop exiting
[:0 cport:2000):] o.a.s.s.o.a.z.s.PrepRequestProcessor : Processed session termination for sessionid: 0x100166ad36d0021
[er Executor - 4] o.a.s.s.org.apache.zookeeper.ZooKeeper : Session: 0x100166ad36d0021 closed
[.0/0.0.0.0:2000] o.a.s.s.o.a.z.server.NIOServerCnxn : Unable to read additional data from client sessionid 0x100166ad36d0021, likely client has closed socket
[.0/0.0.0.0:2000] o.a.s.s.o.a.z.server.NIOServerCnxn : Closed socket connection for client /0:0:0:0:0:0:0:1:63885 which had sessionid 0x100166ad36d0021
[- 4-EventThread] o.a.s.s.org.apache.zookeeper.ClientCnxn : EventThread shut down for session: 0x100166ad36d0021
[ SLOT_1027] org.apache.storm.ProcessSimulator : Begin killing process 1347f01d-7982-4141-9b9d-cac65a6e703d
[ SLOT_1027] org.apache.storm.daemon.worker.Worker : Shutting down worker forex-topology-1-1577152204 517f3306-5ad3-433b-82e1-b2d031779f0b 1027
[ SLOT_1027] org.apache.storm.daemon.worker.Worker : Terminating messaging context
[ SLOT_1027] org.apache.storm.daemon.worker.Worker : Shutting down executors
[ SLOT_1027] o.a.storm.executor.ExecutorShutdown : Shutting down executor __system:[-1, -1]
[xecutor[-1, -1]] org.apache.storm.utils.Utils : Async loop interrupted!
[ SLOT_1027] o.a.storm.executor.ExecutorShutdown : Shut down executor __system:[-1, -1]
[ SLOT_1027] o.a.storm.executor.ExecutorShutdown : Shutting down executor pairStrengthAccumulator:[8, 8]
[-executor[8, 8]] org.apache.storm.utils.Utils : Async loop interrupted!
[ SLOT_1027] o.a.s.cassandra.executor.AsyncExecutor : shutting down async handler executor
[ SLOT_1027] o.a.s.c.client.impl.DefaultClient : Try to close connection to cluster: cluster2
The following logs appear about 40 times:
[ main] o.a.storm.zookeeper.ClientZookeeper : Starting ZK Curator
[ main] o.a.s.s.o.a.c.f.i.CuratorFrameworkImpl : Starting
[ main] o.a.s.s.org.apache.zookeeper.ZooKeeper : Initiating client connection, connectString=localhost:2000/storm sessionTimeout=20000 watcher=org.apache.storm.shade.org.apache.curator.ConnectionState@4bcaa195
[ main] o.a.s.s.o.a.c.f.i.CuratorFrameworkImpl : Default schema
[localhost:2000)] o.a.s.s.org.apache.zookeeper.ClientCnxn : Opening socket connection to server localhost/0:0:0:0:0:0:0:1:2000. Will not attempt to authenticate using SASL (unknown error)
[ main] o.a.storm.zookeeper.ClientZookeeper : Starting ZK Curator
[ main] o.a.s.s.o.a.c.f.i.CuratorFrameworkImpl : Starting
[.0/0.0.0.0:2000] o.a.s.s.o.a.z.s.NIOServerCnxnFactory : Accepted socket connection from /0:0:0:0:0:0:0:1:63756
[ main] o.a.s.s.org.apache.zookeeper.ZooKeeper : Initiating client connection, connectString=localhost:2000/storm sessionTimeout=20000 watcher=org.apache.storm.shade.org.apache.curator.ConnectionState@6bc24e72
[localhost:2000)] o.a.s.s.org.apache.zookeeper.ClientCnxn : Socket connection established to localhost/0:0:0:0:0:0:0:1:2000, initiating session
[.0/0.0.0.0:2000] o.a.s.s.o.a.z.server.ZooKeeperServer : Client attempting to establish new session at /0:0:0:0:0:0:0:1:63756
[ main] o.a.s.s.o.a.c.f.i.CuratorFrameworkImpl : Default schema
[localhost:2000)] o.a.s.s.org.apache.zookeeper.ClientCnxn : Session establishment complete on server localhost/0:0:0:0:0:0:0:1:2000, sessionid = 0x100166ad36d0001, negotiated timeout = 20000
[ SyncThread:0] o.a.s.s.o.a.z.server.ZooKeeperServer : Established session 0x100166ad36d0001 with negotiated timeout 20000 for client /0:0:0:0:0:0:0:1:63756
[localhost:2000)] o.a.s.s.org.apache.zookeeper.ClientCnxn : Opening socket connection to server localhost/127.0.0.1:2000. Will not attempt to authenticate using SASL (unknown error)
[ain-EventThread] o.a.s.s.o.a.c.f.s.ConnectionStateManager : State change: CONNECTED
[localhost:2000)] o.a.s.s.org.apache.zookeeper.ClientCnxn : Socket connection established to localhost/127.0.0.1:2000, initiating session
[.0/0.0.0.0:2000] o.a.s.s.o.a.z.s.NIOServerCnxnFactory : Accepted socket connection from /127.0.0.1:63759
[.0/0.0.0.0:2000] o.a.s.s.o.a.z.server.ZooKeeperServer : Client attempting to establish new session at /127.0.0.1:63759
[ SyncThread:0] o.a.s.s.o.a.z.server.ZooKeeperServer : Established session 0x100166ad36d0002 with negotiated timeout 20000 for client /127.0.0.1:63759
[localhost:2000)] o.a.s.s.org.apache.zookeeper.ClientCnxn : Session establishment complete on server localhost/127.0.0.1:2000, sessionid = 0x100166ad36d0002, negotiated timeout = 20000
[ain-EventThread] o.a.s.s.o.a.c.f.s.ConnectionStateManager : State change: CONNECTED
[ main] o.a.storm.validation.ConfigValidation : task.heartbeat.frequency.secs is a deprecated config please see class org.apache.storm.Config.TASK_HEARTBEAT_FREQUENCY_SECS for more information.
Your main method does this:
public static void main(String[] args) {
    ApplicationContext springContext = SpringApplication.run(CurrencyStrengthCalculatorApplication.class, args);
    StormTopology topology = SpringBasedTopologyBuilder.getInstance().buildStormTopologyUsingApplicationContext(springContext);
    LOG.info("Topology created successfully. Now starting it .............");
    new LocalCluster().submitTopology("forext-topology", ImmutableMap.of(), topology);
}
submitTopology isn't a blocking call; it just submits the topology and returns. If you want to keep the program running for a while, you need to put in a sleep after the submit, as in the sketch below. Once the main method returns, the LocalCluster will begin shutting down.
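A minimal sketch, assuming the Storm 1.x LocalCluster API (in Storm 2.x LocalCluster is AutoCloseable, so try-with-resources works too):

public static void main(String[] args) throws Exception {
    ApplicationContext springContext = SpringApplication.run(CurrencyStrengthCalculatorApplication.class, args);
    StormTopology topology = SpringBasedTopologyBuilder.getInstance().buildStormTopologyUsingApplicationContext(springContext);
    LocalCluster cluster = new LocalCluster();
    cluster.submitTopology("forext-topology", ImmutableMap.of(), topology);
    Thread.sleep(120_000); // keep main alive while the topology runs; tune as needed
    cluster.shutdown();    // then shut the local cluster down cleanly
}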

Flume errors: Sink hdfssink is not connected to a channel

I get the below error message when I start my Flume agent:
17/10/16 19:33:17 ERROR node.AbstractConfigurationProvider: Sink hdfssink has been removed due to an error during configuration
java.lang.IllegalStateException: Sink hdfssink is not connected to a channel
at org.apache.flume.node.AbstractConfigurationProvider.loadSinks(AbstractConfigurationProvider.java:419)
at org.apache.flume.node.AbstractConfigurationProvider.getConfiguration(AbstractConfigurationProvider.java:98)
at org.apache.flume.node.PollingPropertiesFileConfigurationProvider$FileWatcherRunnable.run(PollingPropertiesFileConfigurationProvider.java:140)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:304)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:178)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
17/10/16 19:33:17 INFO node.AbstractConfigurationProvider: Channel loggerchannel connected to [logsource, loggersink]
17/10/16 19:33:17 INFO node.Application: Starting new configuration:{ sourceRunners:{logsource=EventDrivenSourceRunner: { source:org.apache.flume.source.ExecSource{name:logsource,state:IDLE} }} sinkRunners:{loggersink=SinkRunner: { policy:org.apache.flume.sink.DefaultSinkProcessor@5a311ade counterGroup:{ name:null counters:{} } }} channels:{loggerchannel=org.apache.flume.channel.MemoryChannel{name: loggerchannel}} }
17/10/16 19:33:17 INFO node.Application: Starting Channel loggerchannel
17/10/16 19:33:17 INFO instrumentation.MonitoredCounterGroup: Monitored counter group for type: CHANNEL, name: loggerchannel: Successfully registered new MBean.
17/10/16 19:33:17 INFO instrumentation.MonitoredCounterGroup: Component type: CHANNEL, name: loggerchannel started
17/10/16 19:33:17 INFO node.Application: Starting Sink loggersink
17/10/16 19:33:17 INFO node.Application: Starting Source logsource
17/10/16 19:33:17 INFO source.ExecSource: Exec source starting with command:tail -F /opt/gen_logs/logs/access.log
It doesn't write any files to the HDFS sink. I verified each and every line of the configuration file. Below is my Flume configuration file:
fmp.sources = logsource
fmp.sinks = loggersink hdfssink
fmp.channels = loggerchannel hdfschannel
fmp.sources.logsource.type=exec
fmp.sources.logsource.command = tail -F /opt/gen_logs/logs/access.log
fmp.sinks.loggersink.type=logger
fmp.sinks.hdfssink.type=hdfs
fmp.sinks.hdfssink.hdfs.path=hdfs://quickstart.cloudera:8020/user/cloudera/flume
fmp.channels.loggerchannel.type=memory
fmp.channels.loggerchannel.capacity=1000
fmp.channels.loggerchannel.transactioncapacity=100
fmp.channels.hdfschannel.type=file
fmp.channels.hdfschannel.capacity=1000
fmp.channels.hdfschannel.transactioncapacity=100
fmp.sources.logsource.channels = hdfschannel loggerchannel
fmp.sinks.loggersink.channel = loggerchannel
fmp.sinks.hdfssink.channel = hdfschannel

HBaseTestingUtility: could not start my mini-cluster

I was trying to test my HBase code using HBaseTestingUtility. Every time I started my mini-cluster using the code snippet below, I got an exception.
public void startCluster()
{
    File workingDirectory = new File("./");
    Configuration conf = new Configuration();
    System.setProperty("test.build.data", workingDirectory.getAbsolutePath());
    conf.set("test.build.data", new File(workingDirectory, "zookeeper").getAbsolutePath());
    conf.set("fs.default.name", "file:///");
    conf.set("zookeeper.session.timeout", "180000");
    conf.set("hbase.zookeeper.peerport", "2888");
    conf.set("hbase.zookeeper.property.clientPort", "2181");
    conf.addResource(new Path("conf/hbase-site1.xml"));
    try
    {
        masterDir = new File(workingDirectory, "hbase");
        conf.set(HConstants.HBASE_DIR, masterDir.toURI().toURL().toString());
    }
    catch (MalformedURLException e1)
    {
        logger.error(e1.getMessage());
    }
    Configuration hbaseConf = HBaseConfiguration.create(conf);
    utility = new HBaseTestingUtility(hbaseConf);
    // Change permission for dfs.data.dir, please refer
    // https://issues.apache.org/jira/browse/HBASE-5711 for more details.
    try
    {
        Process process = Runtime.getRuntime().exec("/bin/sh -c umask");
        BufferedReader br = new BufferedReader(new InputStreamReader(process.getInputStream()));
        int rc = process.waitFor();
        if (rc == 0)
        {
            String umask = br.readLine();
            int umaskBits = Integer.parseInt(umask, 8);
            int permBits = 0777 & ~umaskBits;
            String perms = Integer.toString(permBits, 8);
            logger.info("Setting dfs.datanode.data.dir.perm to " + perms);
            utility.getConfiguration().set("dfs.datanode.data.dir.perm", perms);
        }
        else
        {
            logger.warn("Failed running umask command in a shell, nonzero return value");
        }
    }
    catch (Exception e)
    {
        // ignore errors, we might not be running on POSIX, or "sh" might
        // not be on the path
        logger.warn("Couldn't get umask", e);
    }
    if (!checkIfServerRunning())
    {
        hTablePool = new HTablePool(conf, 1);
        try
        {
            zkCluster = new MiniZooKeeperCluster(conf);
            zkCluster.setDefaultClientPort(2181);
            zkCluster.setTickTime(18000);
            zkDir = new File(utility.getClusterTestDir().toString());
            zkCluster.startup(zkDir);
            utility.setZkCluster(zkCluster);
            utility.startMiniCluster();
            utility.getHBaseCluster().startMaster();
        }
        catch (Exception e)
        {
            e.printStackTrace();
            logger.error(e.getMessage());
            throw new RuntimeException(e);
        }
    }
}
I got the following exception:
2013-09-10 15:26:26 INFO ClientCnxn:849 - Socket connection established to localhost/127.0.0.1:2181, initiating session
2013-09-10 15:26:26 INFO ZooKeeperServer:839 - Client attempting to establish new session at /127.0.0.1:45934
2013-09-10 15:26:26 INFO ZooKeeperServer:595 - Established session 0x141074cd6150002 with negotiated timeout 180000 for client /127.0.0.1:45934
2013-09-10 15:26:26 INFO ClientCnxn:1207 - Session establishment complete on server localhost/127.0.0.1:2181, sessionid = 0x141074cd6150002, negotiated timeout = 180000
2013-09-10 15:26:26 INFO HBaseRPC:289 - Server at localhost/127.0.0.1:42926 could not be reached after 1 tries, giving up.
2013-09-10 15:26:26 WARN AssignmentManager:1714 - Failed assignment of -ROOT-,,0.70236052 to localhost,42926,1378806982623, trying to assign elsewhere instead; retry=0
org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed setting up proxy interface org.apache.hadoop.hbase.ipc.HRegionInterface to localhost/127.0.0.1:42926 after attempts=1
at org.apache.hadoop.hbase.ipc.HBaseRPC.handleConnectionException(HBaseRPC.java:291)
at org.apache.hadoop.hbase.ipc.HBaseRPC.waitForProxy(HBaseRPC.java:259)
at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.getHRegionConnection(HConnectionManager.java:1305)
at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.getHRegionConnection(HConnectionManager.java:1261)
at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.getHRegionConnection(HConnectionManager.java:1248)
at org.apache.hadoop.hbase.master.ServerManager.getServerConnection(ServerManager.java:550)
at org.apache.hadoop.hbase.master.ServerManager.sendRegionOpen(ServerManager.java:483)
at org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:1664)
at org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:1387)
at org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:1362)
at org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:1357)
at org.apache.hadoop.hbase.master.AssignmentManager.assignRoot(AssignmentManager.java:2236)
at org.apache.hadoop.hbase.master.HMaster.assignRootAndMeta(HMaster.java:654)
at org.apache.hadoop.hbase.master.HMaster.finishInitialization(HMaster.java:551)
at org.apache.hadoop.hbase.master.HMaster.run(HMaster.java:362)
at java.lang.Thread.run(Thread.java:722)
Caused by: java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:692)
at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:207)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:525)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:489)
at org.apache.hadoop.hbase.ipc.HBaseClient$Connection.setupConnection(HBaseClient.java:416)
at org.apache.hadoop.hbase.ipc.HBaseClient$Connection.setupIOstreams(HBaseClient.java:462)
at org.apache.hadoop.hbase.ipc.HBaseClient.getConnection(HBaseClient.java:1150)
at org.apache.hadoop.hbase.ipc.HBaseClient.call(HBaseClient.java:1000)
at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Invoker.invoke(WritableRpcEngine.java:150)
at com.sun.proxy.$Proxy20.getProtocolVersion(Unknown Source)
at org.apache.hadoop.hbase.ipc.WritableRpcEngine.getProxy(WritableRpcEngine.java:183)
at org.apache.hadoop.hbase.ipc.HBaseRPC.getProxy(HBaseRPC.java:335)
at org.apache.hadoop.hbase.ipc.HBaseRPC.getProxy(HBaseRPC.java:312)
at org.apache.hadoop.hbase.ipc.HBaseRPC.getProxy(HBaseRPC.java:364)
at org.apache.hadoop.hbase.ipc.HBaseRPC.waitForProxy(HBaseRPC.java:236)
... 14 more
2013-09-10 15:26:26 WARN AssignmentManager:1736 - Unable to find a viable location to assign region -ROOT-,,0.70236052
2013-09-10 15:27:24 INFO audit:5677 - allowed=true ugi=aniket (auth:SIMPLE) ip=/127.0.0.1 cmd=listStatus src=/user/aniket/hbase/.oldlogs dst=null perm=null
2013-09-10 15:27:24 INFO audit:5677 - allowed=true ugi=aniket (auth:SIMPLE) ip=/127.0.0.1 cmd=listStatus src=/user/aniket/hbase/.archive dst=null perm=null
2013-09-10 15:28:24 INFO audit:5677 - allowed=true ugi=aniket (auth:SIMPLE) ip=/127.0.0.1 cmd=listStatus src=/user/aniket/hbase/.archive dst=null perm=null
2013-09-10 15:28:24 INFO audit:5677 - allowed=true ugi=aniket (auth:SIMPLE) ip=/127.0.0.1 cmd=listStatus src=/user/aniket/hbase/.oldlogs dst=null perm=null
2013-09-10 15:29:24 INFO audit:5677 - allowed=true ugi=aniket (auth:SIMPLE) ip=/127.0.0.1 cmd=listStatus src=/user/aniket/hbase/.oldlogs dst=null perm=null
2013-09-10 15:29:24 INFO audit:5677 - allowed=true ugi=aniket (auth:SIMPLE) ip=/127.0.0.1 cmd=listStatus src=/user/aniket/hbase/.archive dst=null perm=null
2013-09-10 15:29:42 ERROR MiniHBaseCluster:201 - Error starting cluster
java.lang.RuntimeException: Master not initialized after 200 seconds
at org.apache.hadoop.hbase.util.JVMClusterUtil.startup(JVMClusterUtil.java:206)
at org.apache.hadoop.hbase.LocalHBaseCluster.startup(LocalHBaseCluster.java:420)
at org.apache.hadoop.hbase.MiniHBaseCluster.init(MiniHBaseCluster.java:196)
at org.apache.hadoop.hbase.MiniHBaseCluster.<init>(MiniHBaseCluster.java:76)
at org.apache.hadoop.hbase.HBaseTestingUtility.startMiniHBaseCluster(HBaseTestingUtility.java:635)
at org.apache.hadoop.hbase.HBaseTestingUtility.startMiniCluster(HBaseTestingUtility.java:609)
at org.apache.hadoop.hbase.HBaseTestingUtility.startMiniCluster(HBaseTestingUtility.java:557)
at org.apache.hadoop.hbase.HBaseTestingUtility.startMiniCluster(HBaseTestingUtility.java:526)
at HBaseTesting.startCluster(HBaseTesting.java:131)
at HBaseTesting.main(HBaseTesting.java:62)
2013-09-10 15:29:42 INFO HMaster:1635 - Cluster shutdown requested
2013-09-10 15:29:42 INFO HRegionServer:1666 - STOPPED: Shutdown requested
2013-09-10 15:29:42 INFO HBaseServer:1651 - Stopping server on 42926
Could anyone help me with a solution?
Here's my solution.
I had to update my /etc/hosts file.
The two entries of interest are:
127.0.0.1 localhost
127.0.1.1 myhostname
I had to change the entry so that myhostname also points to 127.0.0.1.
Once my /etc/hosts was updated to look like this:
127.0.0.1 localhost
127.0.0.1 myhostname
the code started working.
(This is assuming a Linux server; replace /etc/hosts with the equivalent file for your operating system.)
http://en.wikipedia.org/wiki/Hosts_(file)
Adding a Guava dependency to my Gradle file worked for me:
compile group: 'com.google.guava', name: 'guava', version: '14.0'
