service was unable to place a task because no container instance met requirements. The closest matching container-instance has insufficient CPU units

I am trying to run 2 tasks on the same ECS container instance. The container instance is a t2.large EC2 instance.
One of the tasks (a daemon) starts fine and is RUNNING.
The other task, an application, does not start, and I see the following errors in the Events tab:
service test-service was unable to place a task because no container instance met all of its requirements. The closest matching container-instance xxxxxx has insufficient CPU units available. For more information, see the Troubleshooting section.
service test-service was unable to place a task because no container instance met all of its requirements. The closest matching container-instance xxxxxx has insufficient memory available. For more information, see the Troubleshooting section.
I looked at the CPU and memory section for the container instance, and the values are:

         Registered   Available
CPU      1024         1014
Memory   985          729
My task definition for the task that does not run has the following CPU and memory values:
"memory": 512,
"cpu": 10
The daemon that successfully runs on the same EC2 container instance also has the same values for memory and CPU.
I read through the AWS docs at https://aws.amazon.com/premiumsupport/knowledge-center/ecs-container-instance-requirement-error/ and tried reducing the CPU and memory requirements in the test-service task definition, but nothing helped. I also changed the instance type to a larger one, but that did not help either.
Can someone please tell me what CPU and memory values I should use for the two tasks (daemon and application) so they can run on the same EC2 container instance?
Note: I cannot add another container to the ECS cluster.

The task definition sets the CPU limit to 10 units, which is probably insufficient in your case. ECS can manage CPU resources dynamically when you set the value to 0; however, that is not possible for memory.
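For illustration, a container definition could leave CPU unreserved by setting it to 0 and use a soft memory reservation (`memoryReservation`) instead of a hard `memory` limit, so the scheduler only has to find the smaller reserved amount. This is a minimal sketch; the family, container name, image, and values are placeholders, not your actual task definition:

```json
{
  "family": "test-service",
  "containerDefinitions": [
    {
      "name": "app",
      "image": "my-app:latest",
      "cpu": 0,
      "memoryReservation": 256
    }
  ]
}
```

With `cpu: 0` the container competes for idle CPU instead of reserving units, and the soft reservation lets the container burst above 256 MiB when the instance has memory free.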

Related

Docker image does not run

I just downloaded a new Docker image. When I try to run it, I get this log on my console:
Setting Active Processor Count to 4
Calculating JVM memory based on 381456K available memory
unable to calculate memory configuration
fixed memory regions require 654597K which is greater than 381456K available for allocation: -XX:MaxDirectMemorySize=10M, -XX:MaxMetaspaceSize=142597K, -XX:ReservedCodeCacheSize=240M, -Xss1M * 250 threads
Please, how can I fix this?
I assume you have multiple services and are starting them at the same time. The issue relates to the memory that Docker and Spring Boot use.
Try this:
environment:
  - JAVA_TOOL_OPTIONS=-Xmx128000K
deploy:
  resources:
    limits:
      memory: 800m
You have to set the memory limits using the .yaml syntax shown. At startup, each service takes a lot of memory, so no memory remains for the rest of the services, and the other services start failing with memory-related messages.
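Putting those pieces together, a minimal docker-compose.yml might look like the sketch below. The service name and image are assumptions for illustration; only the `environment` and `deploy.resources.limits` entries come from the answer above:

```yaml
services:
  app:
    image: my-spring-app:latest          # hypothetical image name
    environment:
      - JAVA_TOOL_OPTIONS=-Xmx128000K    # cap the JVM heap below the container limit
    deploy:
      resources:
        limits:
          memory: 800m                   # hard memory limit for the container
```

The key point is that the JVM heap cap (`-Xmx`) must leave headroom under the container limit for metaspace, thread stacks, and direct memory, otherwise the memory calculator fails as in the log above.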

PCF Task/Scheduler Job memory allocation from manifest.yml

I have a task app in PCF. Whenever I run the task or a scheduled job, the memory allocated for the task/job execution is not the same as for the app: it is always 512MB (the default in my case), while the app is allocated 2GB. Below is my manifest.yml:
applications:
- name: hello-world
  instances: 0
  memory: 2G
I can allocate memory for a task from the CLI as shown below, but I don't know how to do this for a PCF Scheduler job:
cf run-task hello-world ".java-buildpack/open_jdk_jre/bin/java org.springframework.boot.loader.JarLauncher" -m 2GB
What about the production environment, where I can't use the CLI?
Is there any way to allocate memory for the task and the PCF Scheduler job from manifest.yml?
The PCF Scheduler does not, at the time I write this, support setting the memory limit for a scheduled task. It always uses the default memory limit set by your platform operations team. This is a popular feature request, though, and is on the roadmap.

Hadoop 2.x on amazon ec2 t2.micro

I'm trying to install and configure Hadoop 2.6 on an Amazon EC2 t2.micro instance (the free-tier one, with only 1GB RAM) in pseudo-distributed mode.
I could configure and start all the daemons (i.e. NameNode, DataNode, ResourceManager, NodeManager). But when I try to run a MapReduce wordcount example, it fails.
I don't know whether it is failing due to low memory (since t2.micro has only 1GB of memory, some of which is taken up by the host OS, Ubuntu in my case) or for some other reason.
I'm using the default memory settings. If I tweak everything down to minimum memory settings, will that solve the problem? What is the minimum memory in MB that can be assigned to containers?
Thanks a lot, guys. I'll appreciate any information you can provide.
Without tweaking any memory settings, I could only sometimes run a pi example with 1 mapper and 1 reducer on the free-tier t2.micro instance; it fails most of the time.
By using the memory-optimized r3.large instance with 15GB RAM, everything works perfectly. All jobs complete without failure.
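If you still want to try squeezing Hadoop onto 1 GB, a low-memory yarn-site.xml might look like the sketch below. The values are illustrative guesses for a t2.micro, not tuned settings, and even then jobs may fail under load:

```xml
<configuration>
  <!-- Total memory YARN may hand out on this node -->
  <property>
    <name>yarn.nodemanager.resource.memory-mb</name>
    <value>768</value>
  </property>
  <!-- Smallest container YARN will allocate -->
  <property>
    <name>yarn.scheduler.minimum-allocation-mb</name>
    <value>128</value>
  </property>
  <!-- Largest container a single request may ask for -->
  <property>
    <name>yarn.scheduler.maximum-allocation-mb</name>
    <value>768</value>
  </property>
  <!-- Virtual-memory checking often kills small containers -->
  <property>
    <name>yarn.nodemanager.vmem-check-enabled</name>
    <value>false</value>
  </property>
</configuration>
```

The minimum-allocation setting answers the question above: containers can be as small as you set `yarn.scheduler.minimum-allocation-mb`, but the JVMs inside them still need enough heap to run the map and reduce tasks.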

Marathon kills container when requests increase

I deploy Docker containers on Mesos (0.21) and Marathon (0.7.6) on Google Compute Engine.
I use JMeter to test a REST service that runs on Marathon. With fewer than 10 concurrent requests it works normally, but when concurrent requests exceed 50, the container is killed and Mesos starts another container. I increased RAM and CPU, but it still happens.
This is the log in /var/log/mesos/:
E0116 09:33:31.554816 19298 slave.cpp:2344] Failed to update resources for container 10e47946-4c54-4d64-9276-0ce94af31d44 of executor dev_service.2e25332d-964f-11e4-9004-42010af05efe running task dev_service.2e25332d-964f-11e4-9004-42010af05efe on status update for terminal task, destroying container: Failed to determine cgroup for the 'cpu' subsystem: Failed to read /proc/612/cgroup: Failed to open file '/proc/612/cgroup': No such file or directory
The error message you're seeing is actually another symptom, not the root cause of the issue. There's a good explanation/discussion in this Apache Jira bug report:
https://issues.apache.org/jira/browse/MESOS-1837
Basically, your container is crashing for one reason or another and the /proc/pid#/ directory is getting cleared without Mesos being aware, so it throws the error message you found when it goes to check that /proc directory.
Try setting your allocated CPU higher in the JSON file describing your task.
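For example, the CPU and memory reservations live in the top-level `cpus` and `mem` fields of the Marathon app definition. This is a sketch with placeholder id, image, and values, not the questioner's actual app:

```json
{
  "id": "/dev-service",
  "cpus": 1.0,
  "mem": 512,
  "instances": 1,
  "container": {
    "type": "DOCKER",
    "docker": {
      "image": "my-rest-service:latest"
    }
  }
}
```

Raising `cpus` and `mem` here increases what Mesos reserves for each container, which can stop the container being OOM-killed or CPU-throttled into crashing under load.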

Error in autoscaling in EC2?

I am using the Auto Scaling feature.
I set up everything, but instances were automatically launched and terminated even though the threshold was never reached.
I followed these steps:
Created an instance
Created a load balancer and registered the instance
Created a launch configuration
Created a CloudWatch alarm for CPU >= 50%
Created an Auto Scaling policy that launches and terminates instances when CPU >= 50%
But as soon as I apply the policy, instances begin to launch and terminate without any CPU load, and it continues:
Cause: At 2014-01-14T10:51:08Z an instance was started in response to a difference between desired and actual capacity, increasing the capacity from 0 to 1.
StartTime: 2014-01-14T10:51:08.791Z
Cause: At 2014-01-14T10:02:16Z an instance was taken out of service in response to a system health-check.
UPDATE
Documentation:
Follow the instructions on how to Set Up an Auto-Scaled and Load-Balanced Application.
Notes:
An instance created outside of an Auto Scaling group can be added to an Elastic Load Balancer, but it will not be monitored or managed by the Auto Scaling group.
An instance created outside of an Auto Scaling group can be marked as unhealthy by the Elastic Load Balancer if the health check fails, but that will not cause the Auto Scaling group to spawn a new instance.