Hadoop HiveServer2 worker threads - how to get info about them

I was getting
HiveServer2 Process - Thrift server getting "Connection refused"
and hiveserver2.log was flooded with messages similar to:
2016-01-06 12:19:57,617 WARN [Thread-8]: server.TThreadPoolServer (TThreadPoolServer.java:serve(184)) - Task has been rejected by ExecutorService 9 times till timedout, reason: java.util.concurrent.RejectedExecutionException: Task org.apache.thrift.server.TThreadPoolServer$WorkerProcess#753dd4d2 rejected from java.util.concurrent.ThreadPoolExecutor#4408d67c[Running, pool size = 500, active threads = 500, queued tasks = 0, completed tasks = 12772]
those 500 active threads being the default maximum, hive.server2.thrift.http.max.worker.threads.
This was sorted by restarting HS2, but how can I gather more information about
which YARN job a worker thread maps to,
or for how long it has been running?
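One approach (a sketch, not from the original post): a thread dump of the HiveServer2 JVM (e.g. with jstack) shows the Thrift worker threads, and the query IDs logged per thread in hiveserver2.log can then be matched to YARN applications. For "how long has it been running", the ResourceManager REST API reports the elapsed time of every running application. A minimal sketch, assuming a hypothetical ResourceManager at rm-host:8088 and that Hive queries run as the hive user:

# Sketch: list running YARN applications submitted by the "hive" user and their
# elapsed time via the ResourceManager REST API. Host, port and user are assumptions.
import requests

RM = "http://rm-host:8088"  # hypothetical ResourceManager address

resp = requests.get(RM + "/ws/v1/cluster/apps", params={"states": "RUNNING", "user": "hive"})
resp.raise_for_status()
apps = (resp.json().get("apps") or {}).get("app", [])

for app in apps:
    # elapsedTime is reported in milliseconds
    print(app["id"], app["name"], "%.0f s" % (app["elapsedTime"] / 1000.0))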

Related

The oozie job does not run with the message [AM container is launched, waiting for AM container to Register with RM]

I ran a shell job from the Oozie examples.
However, the YARN application is not executed.
Detailed information (YARN UI & logs):
https://docs.google.com/document/d/1N8LBXZGttY3rhRTwv8cUEfK3WkWtvWJ-YV1q_fh_kks/edit
The YARN application status is:
Application Priority: 0 (Higher Integer value indicates higher priority)
YarnApplicationState: ACCEPTED: waiting for AM container to be allocated, launched and register with RM.
Queue: default
FinalStatus Reported by AM: Application has not completed yet.
Finished: N/A
Elapsed: 20mins, 30sec
Tracking URL: ApplicationMaster
Log Aggregation Status: DISABLED
Application Timeout (Remaining Time): Unlimited
Diagnostics: AM container is launched, waiting for AM container to Register with RM
The application attempt status is:
Application Attempt State: FAILED
Elapsed: 13mins, 19sec
AM Container: container_1607273090037_0001_02_000001
Node: N/A
Tracking URL: History
Diagnostics Info: ApplicationMaster for attempt appattempt_1607273090037_0001_000002 timed out
                                          Node Local Request   Rack Local Request   Off Switch Request
Num Node Local Containers (satisfied by)  0
Num Rack Local Containers (satisfied by)  0                    0
Num Off Switch Containers (satisfied by)  0                    0                    1
NodeManager log:
2020-12-07 01:45:16,237 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.scheduler.ContainerScheduler: Starting container [container_1607273090037_0001_01_000001]
2020-12-07 01:45:16,267 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.container.ContainerImpl: Container container_1607273090037_0001_01_000001 transitioned from SCHEDULED to RUNNING
2020-12-07 01:45:16,267 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Starting resource-monitoring for container_1607273090037_0001_01_000001
2020-12-07 01:45:16,272 INFO org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: launchContainer: [bash, /tmp/hadoop-oozie/nm-local-dir/usercache/oozie/appcache/application_1607273090037_0001/container_1607273090037_0001_01_000001/default_container_executor.sh]
2020-12-07 01:45:17,301 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: container_1607273090037_0001_01_000001's ip = 127.0.0.1, and hostname = localhost.localdomain
2020-12-07 01:45:17,345 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Skipping monitoring container container_1607273090037_0001_01_000001 since CPU usage is not yet available.
2020-12-07 01:45:48,274 INFO logs: Aliases are enabled
2020-12-07 01:54:50,242 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService: Cache Size Before Clean: 496756, Total Deleted: 0, Public Deleted: 0, Private Deleted: 0
2020-12-07 01:58:10,071 INFO SecurityLogger.org.apache.hadoop.ipc.Server: Auth successful for appattempt_1607273090037_0001_000001 (auth:SIMPLE)
2020-12-07 01:58:10,078 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl: Stopping container with container Id: container_1607273090037_0001_01_000001
What is the problem?
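For reference, the same status and diagnostics shown above can also be pulled from the ResourceManager REST API, together with the AM container log link for the failed attempt. A minimal sketch, assuming a hypothetical ResourceManager at rm-host:8088 (the application id is the one from the logs above):

# Sketch: fetch application state, diagnostics and the AM container log link
# from the ResourceManager REST API. Host and port are placeholders.
import requests

RM = "http://rm-host:8088"  # hypothetical ResourceManager address
app_id = "application_1607273090037_0001"

resp = requests.get(RM + "/ws/v1/cluster/apps/" + app_id)
resp.raise_for_status()
app = resp.json()["app"]

for field in ("state", "finalStatus", "elapsedTime", "diagnostics", "amContainerLogs"):
    print(field + ":", app.get(field))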

AWS Glue JOB: Command failed with error code 1

We have a Python script for our Glue job, triggered to run every hour, which converts JSON in S3 to Parquet files, and we are getting the following issue. The logs below are taken from CloudWatch for the job ID:
CoarseGrainedExecutorBackend: Driver commanded a shutdown
18/06/25 08:54:03 ERROR TransportResponseHandler: Still have 1 requests outstanding when connection from ip-172-31-34-26.ec2.internal/172.31.34.26:36135 is closed
18/06/25 08:54:03 ERROR OneForOneBlockFetcher: Failed while starting block fetches
java.io.IOException: Connection from ip-172-31-34-26.ec2.internal/172.31.34.26:36135 closed
at org.apache.spark.network.client.TransportResponseHandler.channelInactive(TransportResponseHandler.java:146)
at org.apache.spark.network.server.TransportChannelHandler.channelInactive(TransportChannelHandler.java:108)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelInactive(AbstractChannelHandlerContext.java:241)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelInactive(AbstractChannelHandlerContext.java:227)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelInactive(AbstractChannelHandlerContext.java:220)
at io.netty.channel.ChannelInboundHandlerAdapter.channelInactive(ChannelInboundHandlerAdapter.java:75)
at io.netty.handler.timeout.IdleStateHandler.channelInactive(IdleStateHandler.java:278)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelInactive(AbstractChannelHandlerContext.java:241)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelInactive(AbstractChannelHandlerContext.java:227)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelInactive(AbstractChannelHandlerContext.java:220)
at io.netty.channel.ChannelInboundHandlerAdapter.channelInactive(ChannelInboundHandlerAdapter.java:75)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelInactive(AbstractChannelHandlerContext.java:241)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelInactive(AbstractChannelHandlerContext.java:227)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelInactive(AbstractChannelHandlerContext.java:220)
at io.netty.channel.ChannelInboundHandlerAdapter.channelInactive(ChannelInboundHandlerAdapter.java:75)
at org.apache.spark.network.util.TransportFrameDecoder.channelInactive(TransportFrameDecoder.java:182)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelInactive(AbstractChannelHandlerContext.java:241)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelInactive(AbstractChannelHandlerContext.java:227)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelInactive(AbstractChannelHandlerContext.java:220)
at io.netty.channel.DefaultChannelPipeline$HeadContext.channelInactive(DefaultChannelPipeline.java:1289)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelInactive(AbstractChannelHandlerContext.java:241)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelInactive(AbstractChannelHandlerContext.java:227)
at io.netty.channel.DefaultChannelPipeline.fireChannelInactive(DefaultChannelPipeline.java:893)
at io.netty.channel.AbstractChannel$AbstractUnsafe$7.run(AbstractChannel.java:691)
at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:399)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:446)
at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:131)
at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:144)
at java.lang.Thread.run(Thread.java:748)
18/06/25 08:54:03 INFO CoarseGrainedExecutorBackend: Driver from 172.31.47.44:45951 disconnected during shutdown
18/06/25 08:54:03 INFO CoarseGrainedExecutorBackend: Driver from 172.31.47.44:45951 disconnected during shutdown
18/06/25 08:54:03 INFO RetryingBlockFetcher: Retrying fetch (1/3) for 1 outstanding blocks after 5000 ms
18/06/25 08:54:03 INFO MemoryStore: MemoryStore cleared
18/06/25 08:54:03 INFO BlockManager: BlockManager stopped
18/06/25 08:54:03 INFO ShutdownHookManager: Shutdown hook called
Open Glue > Jobs > Edit your Job > Script libraries and job parameters (optional) > Job parameters (near the bottom).
Set the following: key: --conf, value: spark.yarn.executor.memoryOverhead=1024 spark.driver.memory=10g
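The same setting can also be passed per run through the Glue API; a minimal boto3 sketch (the job name is a placeholder, and the --conf value mirrors the one above):

# Sketch: start a Glue job run with the --conf override suggested above.
# "my-json-to-parquet-job" is a placeholder job name.
import boto3

glue = boto3.client("glue")

glue.start_job_run(
    JobName="my-json-to-parquet-job",
    Arguments={
        "--conf": "spark.yarn.executor.memoryOverhead=1024 spark.driver.memory=10g",
    },
)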
There is no direct way to fix this issue; AWS Glue still has many enhancements to be made.
For now we split our folder into multiple sub-folders and split our Glue job into two to handle this scenario; also, the memory overhead was not being taken into account when we supplied our own script.
You need to reduce the number of files you are storing in the S3 bucket by accumulating the data into a single big file (Glue is more efficient on bigger files), for example by coalescing the output before the write, as sketched below.
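A minimal sketch of that idea as a plain PySpark job (not the poster's Glue script; paths and the target file count are placeholders):

# Sketch: read many small JSON files and write them back as a small number of
# larger Parquet files by coalescing before the write. Paths and the target
# number of output files are placeholders.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("json-to-parquet").getOrCreate()

df = spark.read.json("s3://my-bucket/input/")  # hypothetical input prefix
df.coalesce(8).write.mode("overwrite").parquet("s3://my-bucket/output/")  # fewer, bigger files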

Setting up a RocketMQ cluster: slave not visible & not replicating

I'm trying to set up a RocketMQ cluster with a single name server, one master, and two slaves, but I'm running into some problems.
The version I'm running is downloaded from github/rocketmq-all-4.1.0-incubating.zip.
The brokers are run using mqbroker -c broker.conf, where broker.conf
differs for master and slave. For the master I have:
listenPort=10911
brokerName=mybroker
brokerClusterName=mybrokercluster
brokerId=0
deleteWhen=04
fileReservedTime=48
brokerRole=SYNC_MASTER
flushDiskType=ASYNC_FLUSH
And for slaves:
listenPort=10911
brokerName=mybroker
brokerClusterName=mybrokercluster
brokerId=1
deleteWhen=04
fileReservedTime=48
brokerRole=SLAVE
flushDiskType=ASYNC_FLUSH
The second slave has brokerId=2.
The brokers start up fine; here are some parts of the logs for a slave:
2017-10-02 20:31:35 INFO main - brokerRole=ASYNC_MASTER
2017-10-02 20:31:35 INFO main - flushDiskType=ASYNC_FLUSH
(...)
2017-10-02 20:31:35 INFO main - Replace, key: brokerId, value: 0 -> 1
2017-10-02 20:31:35 INFO main - Replace, key: brokerRole, value:
ASYNC_MASTER -> SLAVE
(...)
2017-10-02 20:31:37 INFO main - Set user specified name server address:
172.22.1.38:9876
2017-10-02 20:31:37 INFO ShutdownHook - Shutdown hook was invoked, 1
2017-10-02 20:31:37 INFO ShutdownHook - shutdown thread
PullRequestHoldService interrupt false
2017-10-02 20:31:37 INFO ShutdownHook - join thread PullRequestHoldService
eclipse time(ms) 0 90000
2017-10-02 20:31:37 WARN ShutdownHook - unregisterBroker Exception,
172.22.1.38:9876
org.apache.rocketmq.remoting.exception.RemotingConnectException: connect to
<172.22.1.38:9876> failed
at
org.apache.rocketmq.remoting.netty.NettyRemotingClient.invokeSync(NettyRemotingClient.java:359)
~[rocketmq-remoting-4.1.0-incubating.jar:4.1.0-incubating]
at
org.apache.rocketmq.broker.out.BrokerOuterAPI.unregisterBroker(BrokerOuterAPI.java:221)
~[rocketmq-broker-4.1.0-incubating.jar:4.1.0-incubating]
at
org.apache.rocketmq.broker.out.BrokerOuterAPI.unregisterBrokerAll(BrokerOuterAPI.java:198)
~[rocketmq-broker-4.1.0-incubating.jar:4.1.0-incubating]
at
org.apache.rocketmq.broker.BrokerController.unregisterBrokerAll(BrokerController.java:623)
[rocketmq-broker-4.1.0-incubating.jar:4.1.0-incubating]
at
org.apache.rocketmq.broker.BrokerController.shutdown(BrokerController.java:589)
[rocketmq-broker-4.1.0-incubating.jar:4.1.0-incubating]
at org.apache.rocketmq.broker.BrokerStartup$1.run(BrokerStartup.java:218)
[rocketmq-broker-4.1.0-incubating.jar:4.1.0-incubating]
at java.lang.Thread.run(Thread.java:748) [na:1.8.0_141]
2017-10-02 20:31:37 INFO ShutdownHook - Shutdown hook over, consuming total
time(ms): 25
2017-10-02 20:31:45 INFO BrokerControllerScheduledThread1 - dispatch behind
commit log 0 bytes
2017-10-02 20:31:45 INFO BrokerControllerScheduledThread1 - Slave fall
behind master: 0 bytes
2017-10-02 20:31:45 INFO BrokerControllerScheduledThread1 - register broker
to name server 172.22.1.38:9876 OK
2017-10-02 20:32:15 INFO BrokerControllerScheduledThread1 - register broker
to name server 172.22.1.38:9876 OK
I suspect the broker is trying to connect to the name server, which isn't running initially, so it retries and eventually succeeds?
However, later, when running clusterList, I only see one broker listed, which happens to be a slave (172.22.1.17) and has brokerId=2 in its configuration (although here it is listed as 0):
$ ./mqadmin clusterList -n 172.22.1.38:9876
#Cluster Name    #Broker Name  #BID  #Addr              #Version         #InTPS(LOAD)  #OutTPS(LOAD)  #PCWait(ms)  #Hour      #SPACE
mybrokercluster  mybroker      0     172.22.1.17:10911  V4_1_0_SNAPSHOT  0.00(0,0ms)   0.00(0,0ms)    0            418597.80  -1.0000
Moreover, when sending messages to the master, I get SLAVE_NOT_AVAILABLE.
Why is that? Are the brokers configured properly? If so, why does clusterList report them incorrectly?
You should change the slave port: 10911 is already used by another process (the master node), so each slave should use a different TCP port (e.g. 10921, 10931, and so on).
Tip: my cluster is deployed on one machine, so I changed the TCP port and it started up successfully. If your master and slaves are deployed on different machines and startup still fails, check the RocketMQ error log for more information.
Note: for one master with more than one slave, each slave's brokerId should be different.
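For example, a slave co-located with the master on the same machine might use a broker.conf along these lines; the changed listenPort follows the advice above, while the store-path entries are an added assumption so that the two broker instances do not share a data directory:

listenPort=10921
brokerName=mybroker
brokerClusterName=mybrokercluster
brokerId=1
deleteWhen=04
fileReservedTime=48
brokerRole=SLAVE
flushDiskType=ASYNC_FLUSH
# assumed: separate store paths when several brokers run on one host
storePathRootDir=/data/rocketmq/store-slave1
storePathCommitLog=/data/rocketmq/store-slave1/commitlog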

Spark NullPointerException on SQLListener.onTaskEnd while finishing task

I have a Spark application written in Scala which performs a series of transformations and then writes the result to a Parquet file.
The transformation part finishes without problems, and the output is written to HDFS correctly. The application runs on top of a YARN cluster of 30 nodes.
However, the Spark application itself will not complete and exit YARN; it remains in the ResourceManager.
After hanging for about an hour (consuming resources and vcores), it either finishes or throws an error and kills itself.
Here is the error log of the application. I would appreciate it if anyone could shed some light on this matter.
16/08/24 14:51:12 INFO impl.ContainerManagementProtocolProxy: Opening proxy : phhdpdn013x.company.com:8041
16/08/24 14:51:22 INFO cluster.YarnClusterSchedulerBackend: Registered executor NettyRpcEndpointRef(null) (phhdpdn013x.company.com:54175) with ID 1
16/08/24 14:51:22 INFO storage.BlockManagerMasterEndpoint: Registering block manager phhdpdn013x.company.com:24700 with 2.1 GB RAM, BlockManagerId(1, phhdpdn013x.company.com, 24700)
16/08/24 14:51:29 INFO cluster.YarnClusterSchedulerBackend: SchedulerBackend is ready for scheduling beginning after waiting maxRegisteredResourcesWaitingTime: 30000(ms)
16/08/24 14:51:29 INFO cluster.YarnClusterScheduler: YarnClusterScheduler.postStartHook done
16/08/24 15:11:00 ERROR scheduler.LiveListenerBus: Listener SQLListener threw an exception
java.lang.NullPointerException
at org.apache.spark.sql.execution.ui.SQLListener.onTaskEnd(SQLListener.scala:167)
at org.apache.spark.scheduler.SparkListenerBus$class.onPostEvent(SparkListenerBus.scala:42)
at org.apache.spark.scheduler.LiveListenerBus.onPostEvent(LiveListenerBus.scala:31)
at org.apache.spark.scheduler.LiveListenerBus.onPostEvent(LiveListenerBus.scala:31)
at org.apache.spark.util.ListenerBus$class.postToAll(ListenerBus.scala:55)
at org.apache.spark.util.AsynchronousListenerBus.postToAll(AsynchronousListenerBus.scala:37)
at org.apache.spark.util.AsynchronousListenerBus$$anon$1$$anonfun$run$1$$anonfun$apply$mcV$sp$1.apply$mcV$sp(AsynchronousListenerBus.scala:80)
at org.apache.spark.util.AsynchronousListenerBus$$anon$1$$anonfun$run$1$$anonfun$apply$mcV$sp$1.apply(AsynchronousListenerBus.scala:65)
at org.apache.spark.util.AsynchronousListenerBus$$anon$1$$anonfun$run$1$$anonfun$apply$mcV$sp$1.apply(AsynchronousListenerBus.scala:65)
at scala.util.DynamicVariable.withValue(DynamicVariable.scala:57)
at org.apache.spark.util.AsynchronousListenerBus$$anon$1$$anonfun$run$1.apply$mcV$sp(AsynchronousListenerBus.scala:64)
at org.apache.spark.util.Utils$.tryOrStopSparkContext(Utils.scala:1181)
at org.apache.spark.util.AsynchronousListenerBus$$anon$1.run(AsynchronousListenerBus.scala:63)
16/08/24 15:11:46 ERROR scheduler.LiveListenerBus: Listener SQLListener threw an exception
java.lang.NullPointerException
What is your version of Spark?
Your error looks a lot like this issue:
https://issues.apache.org/jira/browse/SPARK-12339
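For reference, a quick way to confirm the Spark version in use (here from the PySpark shell; spark-submit --version prints the same from the command line):

# In the PySpark shell, sc is the pre-created SparkContext.
print(sc.version)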

AMQP 1.0 Qpid JMS and an Issue with Failover/Reconnect

I'm using the Qpid JMS 0.8.0 library to implement a standalone Java AMQP client. Because the underlying transport connection tends to break every couple of hours, I have set up reconnection using the following configuration:
failover:(amqps://someurl:5671)?failover.reconnectDelay=2000&failover.warnAfterReconnectAttempts=1
According to the Qpid client configuration page, I expect my client to keep trying to reconnect, increasing the delay between attempts by a factor of 2 (starting with 2 seconds). Instead, according to the log file, only two reconnect attempts were performed when a connection failure was detected, and in the end the whole client application was terminated, which I definitely would like to avoid! Here is the log file:
2016-03-22 14:29:40 INFO AmqpProvider:1190 - IdleTimeoutCheck closed the transport due to the peer exceeding our requested idle-timeout.
2016-03-22 14:29:40 DEBUG FailoverProvider:761 - Failover: the provider reports failure: Transport closed due to the peer exceeding our requested idle-timeout
2016-03-22 14:29:40 DEBUG FailoverProvider:519 - handling Provider failure: Transport closed due to the peer exceeding our requested idle-timeout
2016-03-22 14:29:40 DEBUG FailoverProvider:653 - Connection attempt:[1] to: amqps://publish.preops.nm.eurocontrol.int:5671 in-progress
2016-03-22 14:29:40 INFO FailoverProvider:659 - Connection attempt:[1] to: amqps://publish.preops.nm.eurocontrol.int:5671 failed
2016-03-22 14:29:40 WARN FailoverProvider:686 - Failed to connect after: 1 attempt(s) continuing to retry.
2016-03-22 14:29:42 DEBUG FailoverProvider:653 - Connection attempt:[2] to: amqps://publish.preops.nm.eurocontrol.int:5671 in-progress
2016-03-22 14:29:42 INFO FailoverProvider:659 - Connection attempt:[2] to: amqps://publish.preops.nm.eurocontrol.int:5671 failed
2016-03-22 14:29:42 WARN FailoverProvider:686 - Failed to connect after: 2 attempt(s) continuing to retry.
2016-03-22 14:29:43 DEBUG ThreadPoolUtils:156 - Shutdown of ExecutorService: java.util.concurrent.ThreadPoolExecutor#778970af[Terminated, pool size = 0, active threads = 0, queued tasks = 0, completed tasks = 0] is shutdown: true and terminated: true took: 0.000 seconds.
2016-03-22 14:29:45 DEBUG ThreadPoolUtils:192 - Waited 2.004 seconds for ExecutorService: java.util.concurrent.ScheduledThreadPoolExecutor#877a470[Shutting down, pool size = 1, active threads = 0, queued tasks = 1, completed tasks = 3] to terminate...
2016-03-22 14:29:46 DEBUG ThreadPoolUtils:156 - Shutdown of ExecutorService: java.util.concurrent.ScheduledThreadPoolExecutor#877a470[Terminated, pool size = 0, active threads = 0, queued tasks = 0, completed tasks = 4] is shutdown: true and terminated: true took: 2.889 seconds.
Any idea what I'm doing wrong here? Basically, what I'm looking to achieve is a client that can detect a transport connection failure and try to reconnect every 5-10 seconds.
Many thanks!
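For what it's worth, the pacing of reconnect attempts is controlled by the failover options in the connection URI; a fixed retry interval in the 5-10 second range could be expressed roughly like this (option names as listed in the Qpid JMS configuration documentation; treat the exact set as an assumption to verify against your client version):

failover:(amqps://someurl:5671)?failover.reconnectDelay=5000&failover.useReconnectBackOff=false&failover.warnAfterReconnectAttempts=1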
