PCF Task/Scheduler Job memory allocation from manifest.yml - spring-boot

I have a task app in PCF. Whenever I run the task or a scheduled job, the memory allocated for the task/job execution is not the same as for the app. The task always gets 512 MB (the default in my case), while the app itself is allocated 2 GB. Below is my manifest.yml:
applications:
- name: hello-world
instances: 0
memory: 2G
I can allocate memory for a task from the CLI as shown below, but I don't know how to do this for a PCF Scheduler job:
cf run-task hello-world ".java-buildpack/open_jdk_jre/bin/java org.springframework.boot.loader.JarLauncher" -m 2GB
What about the production environment, where I can't use the CLI?
Is there any way I can allocate memory for the task and the PCF Scheduler job from manifest.yml?

The PCF Scheduler does not, at the time I write this, support setting the memory limit for a scheduled task. It will always use the default memory limit set by your platform operations team. This is a popular feature request, though, and it is on the roadmap.
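For a production environment where you cannot run tasks by hand, one workaround (a sketch only, not a Scheduler feature) is to create the task through the Cloud Controller v3 tasks API, which accepts a memory_in_mb field. The API domain is a placeholder, and the GUID and token lines assume the cf CLI is available wherever this runs (a CI job, for example); substitute values obtained some other way if it is not:

APP_GUID="$(cf app hello-world --guid)"
curl "https://api.YOUR-PCF-DOMAIN/v3/apps/${APP_GUID}/tasks" \
  -X POST \
  -H "Authorization: $(cf oauth-token)" \
  -H "Content-Type: application/json" \
  -d '{
        "command": ".java-buildpack/open_jdk_jre/bin/java org.springframework.boot.loader.JarLauncher",
        "memory_in_mb": 2048
      }'

This does not change the limit the Scheduler applies to its own runs; it only gives you a scripted way to launch a one-off task with an explicit memory size.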

Related

Docker image does not run

I just downloaded a new docker image. When I try to run it I get this log on my console
Setting Active Processor Count to 4
Calculating JVM memory based on 381456K available memory
unable to calculate memory configuration
fixed memory regions require 654597K which is greater than 381456K available for allocation: -XX:MaxDirectMemorySize=10M, -XX:MaxMetaspaceSize=142597K, -XX:ReservedCodeCacheSize=240M, -Xss1M * 250 threads
Please, how can I fix this?
I am assuming that you have multiple services and you are starting them at the same time. The issue is related to the memory that Docker and Spring Boot use.
Try this:
environment:
  - JAVA_TOOL_OPTIONS=-Xmx128000K
deploy:
  resources:
    limits:
      memory: 800m
You have to provide the memory limits shown above, using the .yaml file syntax (a fuller sketch follows below).
At startup each service takes a lot of memory, so there is no memory remaining for the rest of the services, and because of that the other services start failing with the memory-related message.
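As a fuller illustration, a minimal docker-compose.yml sketch could look like the following. The service name and image tag are placeholders, and the deploy.resources.limits section is only honored by deployments that support it (Swarm, or recent versions of docker compose):

version: "3.8"
services:
  my-service:
    image: my-service:latest          # placeholder image
    environment:
      # cap the JVM heap well below the container limit
      - JAVA_TOOL_OPTIONS=-Xmx128000K
    deploy:
      resources:
        limits:
          memory: 800m                # hard memory limit for the container

If the buildpack's memory calculator still reports that the fixed regions exceed the available memory, either the container limit has to be raised or the JVM regions (thread count, code cache, metaspace) have to be reduced.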

service was unable to place a task because no container instance met requirements. The closest matching container-instance has insufficient CPU units

I am trying to run 2 tasks on the same ECS container instance. The container instance is a t2.large EC2 instance.
One of the tasks (which is a daemon) starts fine and is RUNNING.
The other task, which is an application, does not start, and I see the following errors in the Events tab.
service test-service was unable to place a task because no container instance met all of its requirements. The closest matching container-instance xxxxxx has insufficient CPU units available. For more information, see the Troubleshooting section.
service test-service was unable to place a task because no container instance met all of its requirements. The closest matching container-instance xxxxxx has insufficient memory available. For more information, see the Troubleshooting section.
I looked at the CPU and memory section for the container instance and the values are:

         Registered   Available
CPU      1024         1014
Memory   985          729
My task definition for the task that does not run has the following CPU and memory values:
"memory": 512,
"cpu": 10
The daemon that successfully runs on the same EC2 container instance also has the same values for memory and CPU.
I read through the AWS docs at https://aws.amazon.com/premiumsupport/knowledge-center/ecs-container-instance-requirement-error/ and tried to reduce the CPU and memory requirements for the test-service task definition, but nothing helped. I also changed the instance type to something bigger, but that did not help either.
Can someone please help me with what CPU and memory values I should use for both tasks (daemon and application) so they can run on the same EC2 container instance?
Note: I cannot add another container to the ECS cluster.
The task definition sets the CPU limit to 10 units, which is probably insufficient in your case. ECS can manage CPU resources dynamically when you set the value to 0; however, that is not possible for memory.
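As a sketch of what that could look like (the image name is a placeholder, and the memory numbers are assumptions based on the figures in the question; memoryReservation is the soft value that placement subtracts from the instance's remaining 729 MiB):

# Hypothetical revision of the task definition: no CPU reservation,
# soft memory reservation of 256 MiB, hard limit of 512 MiB.
aws ecs register-task-definition \
  --family test-service \
  --container-definitions '[{
    "name": "test-service",
    "image": "test-service:latest",
    "cpu": 0,
    "memoryReservation": 256,
    "memory": 512
  }]'

With only memoryReservation counted at placement time, the application task and the daemon can both fit on the instance, while the hard memory limit still protects the instance from a runaway container.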

Spring Cloud microservices memory usage

I'm running multiple microservices (Spring Cloud + Docker) on small/medium machines on AWS, and recently I found that these machines are often exhausted and need rebooting.
I'm investigating the causes of this loss of power, thinking of possible memory leaks or misconfigurations on the instance/container.
I tried to limit the amount of memory these containers can use by doing:
docker run -m 500M --memory-swap 500M -d my-service:latest
At this point my service (a standard Spring Cloud service with a single endpoint that writes to a Redis DB, using spring-data-redis) didn't even start.
I increased the memory to 760M and it worked, but monitoring it with docker stats I see the minimum is:
CONTAINER CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS
cd5f64aa371e 0.18% 606.9 MiB / 762.9 MiB 79.55% 102.4 MB / 99 MB 1.012 MB / 4.153 MB 60
I added some parameters to limit the JVM memory heap but it doesn't seem to reduce it very much:
_JAVA_OPTIONS: "-Xms8m -Xss256k -Xmx512m"
I'm running
Spring Cloud Brixton.M5
Spring Boot 1.3.2
Java 8 (Oracle JVM)
Docker
Spring data Redis 1.7.1
Is there a reason why such a simple service uses so much memory to run? Are there any features I should disable to improve that?
We've investigated a number of things in a similar setup in terms of the JVM itself. A quick way to save some memory if using Java 8 is to use the following options:
-Xms256m -Xmx512m -XX:-TieredCompilation -Xss256k -XX:+UseG1GC -XX:+UseStringDeduplication
The G1GC is well documented; UseStringDeduplication reduces heap usage by de-duplicating the storage of Strings in the heap (we found about 20% in a JSON/XML web-service type environment); and disabling TieredCompilation makes a big difference in CodeCache use (from about 70 MB down to 10 MB), plus about 10% less Metaspace, at the expense of about 10% longer startup time.
According to Spring's Installing Spring Boot applications page, you can customize the application startup script via an environment variable or a configuration file using the JAVA_OPTS variable.
For example: JAVA_OPTS=-Xmx64m
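Putting the two suggestions together, a minimal sketch of running the container with both a Docker memory limit and the trimmed JVM settings could look like this (it assumes the image's launch script honors JAVA_OPTS, and the 700M limit is an illustration, not a measured requirement):

# Assumption: the Spring Boot launch script inside the image picks up JAVA_OPTS.
docker run -d \
  -m 700M --memory-swap 700M \
  -e JAVA_OPTS="-Xms256m -Xmx512m -Xss256k -XX:-TieredCompilation -XX:+UseG1GC -XX:+UseStringDeduplication" \
  my-service:latest

The heap limit plus thread stacks, Metaspace, and CodeCache must all fit inside the Docker limit, so the container limit should sit comfortably above -Xmx.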

Yarn Application master and container allocation

In YARN, the application master requests the resource manager for the resources, so that the containers for that application can be launched.
Does the application master wait for all the resources to be allocated before it launches the first container, or does it request each container and, as and when it obtains the resources for a container, start launching that specific container?
I.e. what about the situation when only part of the resources are available? Does it wait for the resources to be freed, or proceed based on the available resources?
How does the MR application master decide the resource requirements for an MR job? Does the YARN MR client determine this and send it to the AM, or does the AM work it out itself? If so, what is this based on? I believe this is configurable, but I am asking about the default case when memory and CPU are not provided.
No, the AM does not wait for all resources to be allocated. Instead it schedules / launches containers as resources are given to it by the resource manager.
The size requested for each container is defined in the job configuration when the job is created by the driver. If values were not set explicitly for the job, values from mapred-site and mapred-default are used (see https://hadoop.apache.org/docs/r2.7.1/hadoop-mapreduce-client/hadoop-mapreduce-client-core/mapred-default.xml for the default values of mapreduce.map.memory.mb, mapreduce.reduce.memory.mb, mapreduce.map.cpu.vcores and mapreduce.reduce.cpu.vcores). How these values get translated into resources granted is a bit complicated and is based on the scheduler being used, minimum container allocation settings, etc.
I don't know for certain if there's a maximum number of containers that the MR app master will request other than (# of input splits for mappers) + (number of reducers). The MR app master will release containers when it is done with them (e.g., if you have 1,000 mapper containers but only 20 reducers it will release the other 980 containers once they are no longer needed).
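For illustration, a sketch of overriding these per-task resource requests for a single job from the command line (the jar, class, and values are placeholders; -D overrides only take effect for jobs that go through ToolRunner/GenericOptionsParser):

# Hypothetical job submission with explicit per-task resource requests.
hadoop jar my-job.jar com.example.MyJob \
  -Dmapreduce.map.memory.mb=2048 \
  -Dmapreduce.map.cpu.vcores=1 \
  -Dmapreduce.reduce.memory.mb=4096 \
  -Dmapreduce.reduce.cpu.vcores=2 \
  /input /output

These requests are then rounded and granted by the ResourceManager according to the scheduler's minimum and increment allocation settings.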

Mismatch in no. of executors (Spark in YARN pseudo-distributed mode)

I am running Spark using YARN (Hadoop 2.6) as the cluster manager. YARN is running in pseudo-distributed mode. I started the Spark shell with 6 executors and was expecting the same:
spark-shell --master yarn --num-executors 6
But in the Spark Web UI, I see only 4 executors.
Any reason for this?
PS: I ran the nproc command on my Ubuntu (14.04) machine and the result is given below. I believe this means my system has 8 cores:
mountain#mountain:~$ nproc
8
Did you take into account spark.yarn.executor.memoryOverhead?
Possibly it creates a hidden memory requirement, and in the end YARN could not provide all the resources.
Also, note that YARN rounds the container size up to yarn.scheduler.increment-allocation-mb.
All the details are here:
http://blog.cloudera.com/blog/2015/03/how-to-tune-your-apache-spark-jobs-part-2/
This happens when there are not enough resources on your cluster to start more executors. The following things are taken into account:
A Spark executor runs inside a YARN container. This container size is determined from the value of yarn.scheduler.minimum-allocation-mb in yarn-site.xml. Check this property. If your existing containers consume all available memory, then no more memory will be available for new containers, so no new executors will be started.
The storage memory column in the UI displays the amount of memory used for execution and RDD storage. By default, this equals (HEAP_SPACE - 300 MB) * 75%. The rest of the memory is used for internal metadata, user data structures, and other things. (ref: Spark on YARN: Less executor memory than set via spark-submit)
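A sketch of requesting the executors with explicit sizes, so that the per-executor footprint (executor memory plus overhead, rounded up to the YARN allocation increment) can be checked against the NodeManager's yarn.nodemanager.resource.memory-mb (the 1g and 384m values are illustrative assumptions, not recommendations):

# Each executor needs roughly 1g + 384m, rounded up to the YARN
# allocation increment, before it can be placed on the node.
spark-shell --master yarn \
  --num-executors 6 \
  --executor-cores 1 \
  --executor-memory 1g \
  --conf spark.yarn.executor.memoryOverhead=384

If six such containers (plus the application master's own container) do not fit into the node's configured memory, YARN simply grants fewer executors, which matches the behaviour seen in the question.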
I hope this helps.
