I'm trying to run Chronos on Mesos, but all my jobs are stuck in a queueing state.
systemctl status chronos -l shows:
Mar 20 20:21:08 core-mq3 chronos[17940]: [2017-03-20 20:21:08,985] WARN Insufficient resources remaining for task 'ct:1490040556081:0:JobName:', will append to queue. (Needed: [cpus: 0.5 mem: 256.0 disk: 256.0], Found: [cpus: 1.8 mem: 11034.0 disk: 60398.8,cpus: 2.0 mem: 6542.0 disk: 60399.0]) (org.apache.mesos.chronos.scheduler.mesos.MesosJobFramework:155)
So it is declining the offers even though the offered resources exceed what the task needs.
This was a red herring. There was a constraint that the agent did not fulfill, which is why it couldn't run the task.
Running curl <chronos>/scheduler/jobs/search?name=<job> gave me the full details of the job, which I used to verify that the constraint was not being satisfied.
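For reference, the same check can be scripted; below is a minimal sketch using Python's requests library (the Chronos host, port, and job name are placeholders, and the exact JSON fields may vary between Chronos versions):

import requests

# Placeholders: point these at your Chronos instance and the stuck job.
chronos = 'http://chronos.example.com:4400'
job_name = 'JobName'

resp = requests.get('%s/scheduler/jobs/search' % chronos, params={'name': job_name})
resp.raise_for_status()

for job in resp.json():
    # Chronos constraints look like [["attribute", "EQUALS", "value"], ...];
    # compare these against the attributes your Mesos agents actually advertise.
    print(job.get('name'), job.get('constraints'))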
What version of Go are you using?
go version go1.13 linux/amd64
What OS and processor architecture are you using?
OS: CentOS-7 x86_64 GNU/Linux
What did you do?
I spawn 100 goroutines; each goroutine reads data from Redis. The data is JSON containing the key csv_file (a file path), and each CSV contains 1000 tokens. Each goroutine pops an item from Redis, reads the CSV, and spawns one goroutine per token (so 1000 more goroutines) to call APNS push. During the push I get "EOF"; 90% of the calls fail with this error.
I have set my OS ulimit to 500000.
What did you expect to see?
It should process 10M tokens in 1 minute.
What did you see instead?
I am getting the following error when calling the APNS service during load testing with a development certificate:
time="2020-02-25T08:54:44-05:00" level=info msg="Push Error:%!(EXTRA *url.Error=Post https://api.sandbox.push.apple.com/3/device/eoQtFQtlL4s:APA91bGrV0HqQH4qbxeZCJrX-XMHj63: EOF)"
90% of the calls fail with this error. Each goroutine publishing its 1000 tokens takes 2s and ends with the EOF error, which is extremely slow.
Further information:
Aim:
My aim is to publish 10M tokens in 1 minute.
Where I run the code:
I am running the Go code on an AWS EC2 instance in Virginia (us-east-1).
My question:
Why does this error occur, and how can I fix it?
Any help would be greatly appreciated.
Trying to start an h2o cluster on (MapR) hadoop via python
# startup hadoop h2o cluster
import os
import subprocess
import sys
import h2o
import shlex
import re
from Queue import Queue, Empty
from threading import Thread

def enqueue_output(out, queue):
    """
    Function for communicating streaming text lines from a separate thread.
    See https://stackoverflow.com/questions/375427/non-blocking-read-on-a-subprocess-pipe-in-python
    """
    for line in iter(out.readline, b''):
        queue.put(line)
    out.close()

# clear legacy temp. dir.
hdfs_legacy_dir = '/mapr/clustername/user/mapr/hdfsOutputDir'
if os.path.isdir(hdfs_legacy_dir):
    print subprocess.check_output(shlex.split('rm -r %s' % hdfs_legacy_dir))

# start h2o service in background thread
local_h2o_start_path = '/home/mapr/h2o-3.18.0.2-mapr5.2/'
startup_p = subprocess.Popen(shlex.split('/bin/hadoop jar {}h2odriver.jar -nodes 4 -mapperXmx 6g -timeout 300 -output hdfsOutputDir'.format(local_h2o_start_path)),
                             shell=False,
                             stdout=subprocess.PIPE, stderr=subprocess.PIPE)

# set up message-passing queue, fed by a daemon thread reading the driver's stdout
q = Queue()
t = Thread(target=enqueue_output, args=(startup_p.stdout, q))
t.daemon = True  # thread dies with the program
t.start()

# read lines without blocking until the connection url (or an error) appears
h2o_url_out = ''
while True:
    try:
        line = q.get_nowait()  # or q.get(timeout=.1)
    except Empty:
        continue
    else:  # got line
        print line
        # check for first instance of connection url output
        if re.search('Open H2O Flow in your web browser', line) is not None:
            h2o_url_out = line
            break
        if re.search('Error', line) is not None:
            print 'Error generated: %s' % line
            sys.exit()

print 'Connection url output line: %s' % h2o_url_out
h2o_cnxn_ip = re.search('(?<=Open H2O Flow in your web browser: http:\/\/)(.*?)(?=:)', h2o_url_out).group(1)
print 'H2O connection ip: %s' % h2o_cnxn_ip
This frequently throws a timeout error:
Waiting for H2O cluster to come up...
H2O node 172.18.4.66:54321 requested flatfile
H2O node 172.18.4.65:54321 requested flatfile
H2O node 172.18.4.67:54321 requested flatfile
ERROR: Timed out waiting for H2O cluster to come up (300 seconds)
Error generated: ERROR: Timed out waiting for H2O cluster to come up (300 seconds)
Shutting down h2o cluster
Looking at the docs (http://docs.h2o.ai/h2o/latest-stable/h2o-docs/faq/general-troubleshooting.html) (and just searching for the word "timeout"), I was unable to find anything that helped (e.g. extending the timeout via hadoop jar h2odriver.jar -timeout <some time> did nothing but lengthen the time before the timeout error appeared).
I have noticed that this often happens when another H2O cluster is already up and running (which I don't understand, since I would think YARN could support multiple instances), yet it sometimes also happens when no other cluster is initialized.
Does anyone know anything else that can be tried to solve this problem, or how to get more debugging info beyond the error message thrown by H2O?
UPDATE:
Trying to recreate the problem from the command line, I get
[me@mnode01 project]$ /bin/hadoop jar /home/me/h2o-3.20.0.5-mapr5.2/h2odriver.jar -nodes 4 -mapperXmx 6g -timeout 300 -output hdfsOutputDir
Determining driver host interface for mapper->driver callback...
[Possible callback IP address: 172.18.4.62]
[Possible callback IP address: 127.0.0.1]
Using mapper->driver callback IP address and port: 172.18.4.62:29388
(You can override these with -driverif and -driverport/-driverportrange.)
Memory Settings:
mapreduce.map.java.opts: -Xms6g -Xmx6g -XX:PermSize=256m -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -Dlog4j.defaultInitOverride=true
Extra memory percent: 10
mapreduce.map.memory.mb: 6758
18/08/15 09:18:46 INFO client.MapRZKBasedRMFailoverProxyProvider: Updated RM address to mnode03.cluster.local/172.18.4.64:8032
18/08/15 09:18:48 INFO mapreduce.JobSubmitter: number of splits:4
18/08/15 09:18:48 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1523404089784_7404
18/08/15 09:18:48 INFO security.ExternalTokenManagerFactory: Initialized external token manager class - com.mapr.hadoop.yarn.security.MapRTicketManager
18/08/15 09:18:48 INFO impl.YarnClientImpl: Submitted application application_1523404089784_7404
18/08/15 09:18:48 INFO mapreduce.Job: The url to track the job: https://mnode03.cluster.local:8090/proxy/application_1523404089784_7404/
Job name 'H2O_66888' submitted
JobTracker job ID is 'job_1523404089784_7404'
For YARN users, logs command is 'yarn logs -applicationId application_1523404089784_7404'
Waiting for H2O cluster to come up...
H2O node 172.18.4.65:54321 requested flatfile
H2O node 172.18.4.67:54321 requested flatfile
H2O node 172.18.4.66:54321 requested flatfile
ERROR: Timed out waiting for H2O cluster to come up (300 seconds)
ERROR: (Try specifying the -timeout option to increase the waiting time limit)
Attempting to clean up hadoop job...
Killed.
18/08/15 09:23:54 INFO client.MapRZKBasedRMFailoverProxyProvider: Updated RM address to mnode03.cluster.local/172.18.4.64:8032
----- YARN cluster metrics -----
Number of YARN worker nodes: 6
----- Nodes -----
Node: http://mnode03.cluster.local:8044 Rack: /default-rack, RUNNING, 0 containers used, 0.0 / 7.0 GB used, 0 / 2 vcores used
Node: http://mnode05.cluster.local:8044 Rack: /default-rack, RUNNING, 0 containers used, 0.0 / 10.4 GB used, 0 / 2 vcores used
Node: http://mnode06.cluster.local:8044 Rack: /default-rack, RUNNING, 0 containers used, 0.0 / 10.4 GB used, 0 / 2 vcores used
Node: http://mnode01.cluster.local:8044 Rack: /default-rack, RUNNING, 0 containers used, 0.0 / 5.0 GB used, 0 / 2 vcores used
Node: http://mnode04.cluster.local:8044 Rack: /default-rack, RUNNING, 1 containers used, 7.0 / 10.4 GB used, 1 / 2 vcores used
Node: http://mnode02.cluster.local:8044 Rack: /default-rack, RUNNING, 1 containers used, 2.0 / 8.7 GB used, 1 / 2 vcores used
----- Queues -----
Queue name: root.default
Queue state: RUNNING
Current capacity: 0.00
Capacity: 0.00
Maximum capacity: -1.00
Application count: 0
Queue 'root.default' approximate utilization: 0.0 / 0.0 GB used, 0 / 0 vcores used
----------------------------------------------------------------------
WARNING: Job memory request (26.4 GB) exceeds queue available memory capacity (0.0 GB)
WARNING: Job virtual cores request (4) exceeds queue available virtual cores capacity (0)
ERROR: Only 3 out of the requested 4 worker containers were started due to YARN cluster resource limitations
----------------------------------------------------------------------
For YARN users, logs command is 'yarn logs -applicationId application_1523404089784_7404'
and noticing the later output lines
WARNING: Job memory request (26.4 GB) exceeds queue available memory capacity (0.0 GB)
WARNING: Job virtual cores request (4) exceeds queue available virtual cores capacity (0)
ERROR: Only 3 out of the requested 4 worker containers were started due to YARN cluster resource limitations
I am confused by the reported 0.0 GB memory and 0 vcores, because there are no other applications running on the cluster, and looking at the cluster details in the YARN RM web UI shows available memory on every node (shown via an image, since I could not find a unified place in the log files for this info; why the memory availability is so uneven despite there being no other running applications, I do not know). I should also mention that I don't have much experience tinkering with or examining YARN configs, so it's difficult for me to find the relevant information.
Could it be that I am starting the H2O cluster with -mapperXmx 6g, but (as shown in the image) one of the nodes only has 5 GB of memory available, so if this node is randomly selected to contribute to the initialized H2O application it does not have enough memory to support the requested mapper memory (see the quick check below)? Changing the startup command to /bin/hadoop jar /home/me/h2o-3.20.0.5-mapr5.2/h2odriver.jar -nodes 4 -mapperXmx 5g -timeout 300 -output hdfsOutputDir and starting/stopping multiple times without error seems to support this theory (though I need to check further to determine whether I'm interpreting things correctly).
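To make the arithmetic behind that theory explicit, here is a quick check in Python (a sketch; the 10% overhead figure is taken from the "Extra memory percent: 10" line in the driver output above):

# Rough check of the YARN container size implied by -mapperXmx 6g (a sketch).
mapper_xmx_mb = 6 * 1024             # -mapperXmx 6g
extra_memory_percent = 10            # "Extra memory percent: 10" from the driver output
container_mb = int(mapper_xmx_mb * (1 + extra_memory_percent / 100.0))
print(container_mb)                  # 6758, matching mapreduce.map.memory.mb above
print(container_mb > 5.0 * 1024)     # True: larger than the node reporting 5.0 GB available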
This is most likely because your Hadoop cluster is busy, and there just isn't space to start new YARN containers.
If you ask for N nodes, then you either get all N nodes, or the launch process times out like you are seeing. You can optionally use the -timeout command line flag to increase the timeout.
I have installed single-node Hadoop on Windows and it is apparently working.
Unfortunately, I can't run test application on it.
When I do
hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.9.0.jar grep input output 'dfs[a-z.]+'
as described on its page, the command does not return to the command prompt. On the referenced job page I see
YarnApplicationState: ACCEPTED: waiting for AM container to be
allocated, launched and register with RM.
Diagnostics: [Mon Apr 23 00:46:44 +0300 2018] Application is added to
the scheduler and is not yet activated. Skipping AM assignment as
cluster resource is empty. Details : AM Partition =
<DEFAULT_PARTITION>; AM Resource Request = <memory:2048, vCores:1>;
Queue Resource Limit for AM = <memory:0, vCores:0>; User AM Resource
Limit of the queue = <memory:0, vCores:0>; Queue AM Resource Usage =
<memory:0, vCores:0>;
What does this mean, and how can I push the job through?
When trying to run a query on the gpdb (Greenplum) cluster, I get an out of memory error with error code 53400.
System Related information
TOTAL RAM =30G
SWAP =15G
gp_vmem_protect_limit=8192MB
TOTAL segment = 8 Primary, 8 mirror = 16
SEGMENT HOST=2
Getting this error:
ERROR: Out of memory (seg2 slice109 datanode01:40002 pid=21691)
SQL state: 53400
Detail: VM protect failed to allocate 8388608 bytes from system, VM Protect 4161 MB available
We tried
gpconfig -c gp_vmem_protect_limit -v 4114
vm.overcommit_ratio = 95
Then we get this error:
ERROR: XX000: Canceling query because of high VMEM usage. Used: 3704MB, available 410MB, red zone: 3702MB
Also, the current runaway detector setting is:
Prod=# show runaway_detector_activation_percent;
runaway_detector_activation_percent
-------------------------------------
90
(1 row)
Please suggest what the settings should be in this case (a rough sizing calculation is sketched below for reference).
Also, what is the root cause of the OOM error?
Any help would be much appreciated.
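For reference, here is how the per-segment limit works out for the numbers above using the sizing formula from the Greenplum documentation (a sketch only; the formula and the assumed number of acting primary segments per host should be verified for your setup and Greenplum version):

# Sketch of the gp_vmem_protect_limit sizing formula from the Greenplum docs.
# Values are taken from the question above; acting primaries per host is an assumption.
ram_gb = 30.0
swap_gb = 15.0

# gp_vmem = ((SWAP + RAM) - (7.5GB + 0.05 * RAM)) / 1.7   (for RAM <= 256GB)
gp_vmem_gb = ((swap_gb + ram_gb) - (7.5 + 0.05 * ram_gb)) / 1.7

# 8 primaries + 8 mirrors on 2 hosts: if one host fails, the surviving host may
# act as primary for up to 8 segments (assumption).
max_acting_primaries_per_host = 8

gp_vmem_protect_limit_mb = gp_vmem_gb * 1024 / max_acting_primaries_per_host

print(round(gp_vmem_gb, 1))             # ~21.2 GB usable for all segments on a host
print(int(gp_vmem_protect_limit_mb))    # ~2710 MB per segment, far below the configured 8192 MB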
I am checking out Apache Aurora (0.16.0) and Apache Mesos (1.1.0) with Docker containers. Here is an example Aurora job definition:
process_nginx = Process(
  name='nginx',
  cmdline=textwrap.dedent(r'''
    exec /path_to/nginx -g "daemon off; pid /run/nginx.pid; error_log stderr notice;"
  '''),
  min_duration=3,
  daemon=True,
)

task_nginx = Task(
  name='nginx',
  processes=[process_nginx,],
  resources=Resources(
    cpu=0.1,
    ram=20*MB,
    disk=50*MB,
  ),
  finalization_wait=14,
)

job_nginx = Job(
  cluster='x',
  role='root',
  name='nginx',
  instances=6,
  service=True,
  task=task_nginx,
  priority=1,
  #tier='preferred',
  constraints={
    'X_HOST_MACHINE_ID': 'limit:2',
    'HOST_TYPE.FRONTEND': 'true',
  },
  update_config=UpdateConfig(
    batch_size=1,
    watch_secs=29,
    rollback_on_failure=True,
  ),
  container=Docker(
    image='my_nginx_docker_image_name',
    parameters=[
      {'name': 'network', 'value': 'host'},
      {'name': 'log-driver', 'value': 'journald'},
      {'name': 'log-opt', 'value': 'tag=nginx'},
      {'name': 'oom-score-adj', 'value': '-500'},
      {'name': 'memory-swappiness', 'value': '1'},
    ],
  ),
)
But since specifying the disk and ram limits bothers me, I want to disable both.
problem 1
I thought only the CPU resource would be isolated (i.e. limited) if all my Mesos agents were launched with the option --isolation=cgroups/cpu (not --isolation=cgroups/cpu,cgroups/mem).
But even in this case, all Docker containers launched by the Mesos Docker containerizer are given the --memory option, which is a hard limit and triggers the OOM killer if a container needs more memory. (And it seems the Mesos Docker containerizer does not support --memory-reservation.)
problem 2
Even with --isolation=cgroups/cpu, removing the ram or disk parameter from the Aurora Resources instance causes the following error:
Error loading configuration: TypeCheck(FAILED): MesosJob[task] failed: Task[resources] failed: Resources[ram] is required.
My question
Is it possible to disable memory and disk isolation?
What is the difference between --isolation=cgroups/cpu and --isolation=cgroups/cpu,cgroups/mem?
As you've discovered, you can disable the memory and disk isolators in Mesos by not specifying them as part of the isolation agent flag. I'm unsure about the behavior of the Docker Containerizer in this scenario, but you might want to try using the Mesos Containerizer instead, as this is the preferred way to run Docker images in Mesos going forward.
As far as omitting the Resources from your Aurora config goes, unfortunately that won't be possible. Every Aurora job must specify its resource requirements so that the scheduler can match your task instances up with an offer from Mesos.
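For what it's worth, if the annoyance is just having to pick numbers, a minimal Resources stanza in the same config style as the job above looks like this (a sketch; the small values are arbitrary placeholders, and whether they are actually enforced depends on your containerizer and isolator setup as discussed above):

task_nginx = Task(
  name='nginx',
  processes=[process_nginx,],
  resources=Resources(
    cpu=0.1,
    ram=16*MB,   # still required by Aurora's type check; arbitrary small placeholder
    disk=16*MB,  # same: used for offer matching, enforcement depends on the isolators
  ),
  finalization_wait=14,
)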