Memory Size Errors When Running Magento's cron.sh

We have set the nginx user to run the following command at 5 minute intervals.
sh /var/www/magento/cron.sh
This ran successfully for some time. Within the last month or so, it has begun to error out each time due to the memory limit. I've increased the memory limit, but that only results in a higher limit being reached. It seems that there must be a bigger problem. The error is consistently as follows:
Fatal error: Allowed memory size of 262144 bytes exhausted
(tried to allocate 7680 bytes) in /var/www/magento/app/Mage.php on line 589
Below are the cron jobs that Magento has set to run every five minutes:
job: xmlconnect_notification_send_all
model: xmlconnect/observer::scheduledSend
file: /var/www/magento/app/code/core/Mage/XmlConnect/etc/config.xml
job: newsletter_send_all
model: newsletter/observer::scheduledSend
file: /var/www/magento/app/code/core/Mage/Newsletter/etc/config.xml
job: enterprise_staging_automates
model: enterprise_staging/observer::automates
file: /var/www/magento/app/code/core/Enterprise/Staging/etc/config.xml

Your error message has all the answers
Allowed memory size of 262144 bytes exhausted
While 262144 bytes seems like a big number, it's only 256 KB, or about 0.25 MB.
I believe the Magento docs recommend a memory limit of 256MB, with 512MB being far more common in the wild. You'll need to make sure your CLI PHP (or whichever command-line PHP cron.sh launches) has its memory_limit ini setting set correctly. One common pitfall here is to omit the M, or to use MB:
; Will not do what you want it to
memory_limit=256
memory_limit=256MB
So make sure your configuration file sets it to something like this:
memory_limit=256M
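If you're unsure which configuration the command-line PHP actually reads (it is often a different php.ini than the web server's), a quick check along these lines should tell you; the grep is purely illustrative:
php --ini
php -i | grep memory_limit
The first command lists the ini files the CLI loads; the second shows the effective memory_limit.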

This is a server setup issue. Your php CLI memory_limit is set too low.
Bounce it up to 256M.
If you're not running your own server, then you need to contact your hosting provider.
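If editing php.ini isn't possible right away, the limit can usually also be raised per invocation with PHP's -d switch, since Magento 1's cron.sh ultimately runs cron.php through the CLI php binary. A sketch (adjust the path to your install):
php -d memory_limit=256M -f /var/www/magento/cron.php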

Related

Kafka Streams app fails to start with “what(): Resource temporarily unavailable” in Cloud Foundry

Dear Stackoverflowians,
I’m having a problem with a Spring Cloud Stream app using a Kafka Streams binder. The issue only occurs in our own Pivotal Cloud Foundry (CF) environment. I have kind of hit a wall at this point, so I turn to you and your wisdom!
When the application starts up, I see the following error:
<snip>
2019-08-07T15:17:58.36-0700 [APP/PROC/WEB/0]OUT current active tasks: [0_3, 1_3, 2_3, 3_3, 4_3, 0_7, 1_7, 5_3, 2_7, 3_7, 4_7, 0_11, 1_11, 5_7, 2_11, 3_11, 4_11, 0_15, 1_15, 5_11, 2_15, 3_15, 4_15, 0_19, 1_19, 5_15, 2_19, 3_19, 4_19, 0_23, 1_23, 5_19, 2_23, 3_23, 4_23, 5_23]
2019-08-07T15:17:58.36-0700 [APP/PROC/WEB/0]OUT current standby tasks: []
2019-08-07T15:17:58.36-0700 [APP/PROC/WEB/0]OUT previous active tasks: []
2019-08-07T15:18:02.67-0700 [API/0] OUT Updated app with guid 2db4a719-53ee-4d4a-9573-fe958fae1b4f ({"state"=>"STOPPED"})
2019-08-07T15:18:02.64-0700 [APP/PROC/WEB/0]ERR terminate called after throwing an instance of 'std::system_error'
2019-08-07T15:18:02.64-0700 [APP/PROC/WEB/0]ERR what(): Resource temporarily unavailable
2019-08-07T15:18:02.67-0700 [CELL/0] OUT Stopping instance 516eca4f-ea73-4684-7e48-e43c
2019-08-07T15:18:02.67-0700 [CELL/SSHD/0]OUT Exit status 0
2019-08-07T15:18:02.71-0700 [APP/PROC/WEB/0]OUT Exit status 134
2019-08-07T15:18:02.71-0700 [CELL/0] OUT Destroying container
2019-08-07T15:18:03.62-0700 [CELL/0] OUT Successfully destroyed container
The key here being the line with
what(): Resource temporarily unavailable
The error is related to the number of partitions. If I set the partition count to 12 or fewer, things work. If I double it, the process fails to start with this error.
This doesn’t happen on my local Windows dev machine. It also doesn’t happen in my local Docker environment when I wrap this app in a Docker image and run it. But whether I take the same image and push it to CF, or push the app as a plain Java app, I get this error.
Here is some information about the Kafka Streams app. We have an input topic with a number of partitions. The topic is the output of a Debezium connector; basically it’s a change log of a bunch of database tables. The topology is not super complex, but it’s not trivial either. Its job is to aggregate the table update information back into our aggregates. We end up with 17 local stores in the topology. I have a strong suspicion this issue has something to do with RocksDB and the resources available to the CF container the app is in. But I have not the faintest idea what the resource is that’s “temporarily unavailable”.
As I mentioned, I tried deploying it as a Docker container with various JDK 8 JVMs and different base images (CentOS, Debian), I tried several different CF Java buildpacks, and I tried limiting the Java heap relative to the max container memory size (thinking that maybe it has something to do with native memory allocation), all to no avail.
I’ve also asked our ops folks to raise some limits on the containers; the open files limit changed from the initial 16k to 500k+ now. I saw some file-lock-related errors like the ones below, but they went away after this change.
2019-08-01T15:46:23.69-0700 [APP/PROC/WEB/0]ERR Caused by: org.rocksdb.RocksDBException: lock : /home/vcap/tmp/kafka-streams/cms-cdc/0_7/rocksdb/input/LOCK: No locks available
2019-08-01T15:46:23.69-0700 [APP/PROC/WEB/0]ERR at org.rocksdb.RocksDB.open(Native Method)
However, the what(): Resource temporarily unavailable error with a higher number of partitions persists.
ulimit -a on the container looks like this:
~$ ulimit -a
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 1007531
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files (-n) 524288
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) unlimited
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
I really do need to understand what the root of this error is. It’s hard to plan in this case not knowing what limit we’re hitting here.
Hope to hear your ideas. Thanks!
Edit:
Is there maybe some way to get more verbose error messages from the rocksdb library, or a way to build it so it outputs more info?
Edit 2
I have also tried to customize the RocksDB memory settings using org.apache.kafka.streams.state.RocksDBConfigSetter
The defaults are in org.apache.kafka.streams.state.internals.RocksDBStore#openDB(org.apache.kafka.streams.processor.ProcessorContext)
First, I made sure the Java heap settings were well below the container process size limit and left nothing to the memory calculator by setting:
JAVA_OPTS: -XX:MaxDirectMemorySize=100m -XX:ReservedCodeCacheSize=240m -XX:MaxMetaspaceSize=145m -Xmx1000m
With this I tried:
1. Lowering the write buffer size:
org.rocksdb.Options#setWriteBufferSize(long)
org.rocksdb.Options#setMaxWriteBufferNumber(int)
2. Setting max_open_files to half the limit for the container (the total of all db instances):
org.rocksdb.Options#setMaxOpenFiles(int)
3. I tried turning off the block cache altogether:
org.rocksdb.BlockBasedTableConfig#setNoBlockCache
4. I also tried setting cache_index_and_filter_blocks = true after re-enabling block cache:
https://github.com/facebook/rocksdb/wiki/Block-Cache#caching-index-and-filter-blocks
All to no avail. The issue above still happens when I set a higher number of partitions (24) on the input topic. Now that I have a RocksDBConfigSetter with logging in it, I can see that the error happens exactly when RocksDB is configured.
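For reference, a minimal sketch of this kind of RocksDBConfigSetter (the class name and the concrete values are illustrative, and it is registered via the rocksdb.config.setter streams property):

import java.util.Map;
import org.apache.kafka.streams.state.RocksDBConfigSetter;
import org.rocksdb.BlockBasedTableConfig;
import org.rocksdb.Options;

// Illustrative class name; registered with
// rocksdb.config.setter=com.example.BoundedRocksDBConfigSetter
public class BoundedRocksDBConfigSetter implements RocksDBConfigSetter {

    @Override
    public void setConfig(final String storeName, final Options options, final Map<String, Object> configs) {
        // 1. Smaller, fewer write buffers (memtables) per store instance.
        options.setWriteBufferSize(4 * 1024 * 1024L);  // 4 MB, illustrative
        options.setMaxWriteBufferNumber(2);

        // 2. Cap open files; the container-wide budget has to be shared by
        //    every store instance (roughly stores x partitions of them).
        options.setMaxOpenFiles(100);                  // illustrative

        // 3. Disable the block cache (or re-enable it and also cache
        //    index/filter blocks in it, per the RocksDB wiki).
        final BlockBasedTableConfig tableConfig = new BlockBasedTableConfig();
        tableConfig.setNoBlockCache(true);
        options.setTableFormatConfig(tableConfig);

        // Log so it is visible exactly when each store gets configured.
        System.out.println("Configuring RocksDB store " + storeName);
    }
}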
Edit 3
I still haven't gotten to the bottom of this. I have asked the question on https://www.facebook.com/groups/rocksdb.dev and was advised to trace system calls with strace or similar, but I was not able to obtain the required permissions to do that in our environment.
It has eaten up so much time that I had to settle for a workaround for now. What I ended up doing is refactoring the topology to
1) minimize the number of materialized KTables (and the number of resulting RocksDB instances), and
2) break up the topology among multiple processes.
This allowed me to turn topology parts on and off in separate deployments with Spring profiles, and it has given me a limited way forward for now.
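For illustration, the split looks roughly like this when using the functional binding style of the Kafka Streams binder (the bean names, profile names, and String key/value types are made up, and each deployment also needs its spring.cloud.function.definition and bindings to reference only the active functions):

import java.util.function.Consumer;
import org.apache.kafka.streams.kstream.KStream;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Profile;

@Configuration
public class TopologyParts {

    // Deployment A: run with spring.profiles.active=orders and
    // spring.cloud.function.definition=aggregateOrders
    @Profile("orders")
    @Bean
    public Consumer<KStream<String, String>> aggregateOrders() {
        return input -> {
            // ... the materialized KTables / stores for this part only
        };
    }

    // Deployment B: run as a separate process with the "customers" profile,
    // so each container ends up with far fewer RocksDB instances.
    @Profile("customers")
    @Bean
    public Consumer<KStream<String, String>> aggregateCustomers() {
        return input -> {
            // ... its own stores
        };
    }
}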

Fatal error: Allowed memory size of 536870912 bytes exhausted (tried to allocate 65488 bytes) In Magento 1.9.1

I have changed my unsecure base URL from {{base_url}} to {{unsecure_base_url}}.
After this I am getting this error:
"Fatal error: Allowed memory size of 268435456 bytes exhausted (tried to allocate 65488 bytes)".
I have tried to set ini_set('memory_limit', '512M').
Now it shows the error "Fatal error: Allowed memory size of 536870912 bytes exhausted (tried to allocate 65488 bytes)".
I also changed the value in the database (core_config_data > unsecure/base_url) back to {{base_url}}, but I still have the same error. Can anyone help me?
Thanks in advance.
It sounds like your changes will only take effect after clearing the cache. You can delete the files in
{Magento dir}/var/cache
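For example, something along these lines, where /path/to/magento is a placeholder for your actual Magento root:
rm -rf /path/to/magento/var/cache/*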
The fact that setting the limit to 512M does not help means it needs even more (sounds like a leak to me). Usually something is wrong, but you could try increasing the memory limit until you reach a sane number.
The official documentation states a minimum of 256 and a required 512 megabytes of RAM. Usage may increase when modules/plugins are added, custom layouts are used, etc.
http://magento.com/resources/previous-magento-system-requirements

Allowed memory size exhausted on Laravel Ardent

I got this error when saving with a file on input. The file is uploaded, but I get this error during the saving process.
Allowed memory size of 134217728 bytes exhausted (tried to allocate 94 bytes) in ...vendor/laravelbook/ardent/src/LaravelBook/Ardent/Ardent.php
The size of the file is just 24 KB, and the code is just a typical Eloquent fill. The process is the following:
Get the file from the input, move it to the storage location, and insert its file path into the database.
Update the file ID of the target Eloquent model.
I'm using:
"laravelbook/ardent": "v2.4.2"
Your script is eating all of the memory that the PHP process can use, which in your case is 128 MB.
You can do 2 things:
Optimize your code and figure out which part of the code is the problem.
Set a higher memory_limit, either by changing the php.ini value of memory_limit to 256M for example, or by calling ini_set('memory_limit', '256M');
It was caused by php artisan optimize --force. When I removed bootstrap/compiled.php, it worked again. :) By the way, how is that? Is it a bug in Laravel's php artisan optimize --force?

JMeter issues when running large number of threads

I'm testing using Apache JMeter: I'm simply accessing one page of my company's website and turning up the number of users until it reaches a threshold. The problem is that when I get to around 3000 threads, JMeter doesn't run all of them. Looking at the Aggregate Graph, it only runs about 2,536 of them (this number varies but is always around there).
The partial run comes with the following exception in the logs:
01:16 ERROR - jmeter.JMeter: Uncaught exception:
java.lang.OutOfMemoryError: unable to create new native thread
at java.lang.Thread.start0(Native Method)
at java.lang.Thread.start(Unknown Source)
at org.apache.jmeter.threads.ThreadGroup.start(ThreadGroup.java:293)
at org.apache.jmeter.engine.StandardJMeterEngine.startThreadGroup(StandardJMeterEngine.java:476)
at org.apache.jmeter.engine.StandardJMeterEngine.run(StandardJMeterEngine.java:395)
at java.lang.Thread.run(Unknown Source)
This behavior is consistent. In addition, one of the times JMeter crashed in the middle of the run, outputting a file that said:
# There is insufficient memory for the Java Runtime Environment to continue.
# Native memory allocation (malloc) failed to allocate 32756 bytes for ChunkPool::allocate
# Possible reasons:
# The system is out of physical RAM or swap space
# In 32 bit mode, the process size limit was hit
# Possible solutions:
# Reduce memory load on the system
# Increase physical memory or swap space
# Check if swap backing store is full
# Use 64 bit Java on a 64 bit OS
# Decrease Java heap size (-Xmx/-Xms)
# Decrease number of Java threads
# Decrease Java thread stack sizes (-Xss)
# Set larger code cache with -XX:ReservedCodeCacheSize=
# This output file may be truncated or incomplete.
#
# Out of Memory Error (allocation.cpp:211), pid=10748, tid=11652
#
# JRE version: 6.0_31-b05
# Java VM: Java HotSpot(TM) Client VM (20.6-b01 mixed mode, sharing windows-x86 )
Any ideas?
I tried changing the heap size in jmeter.bat, but that didn't seem to help at all.
The JVM is simply not capable of running that many threads. And even if it were, JMeter would consume a lot of CPU resources purely on context switching. In other words, above some point you are not benchmarking your web application but the client computer hosting JMeter.
You have a few choices:
experiment with JVM options, e.g. decrease the default -Xss512K to something smaller (see the example after this list)
run JMeter in a cluster
use tools taking a radically different approach, like Gatling
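For example, a smaller thread stack can be passed without permanently editing the scripts; the 256k value is just a starting point to experiment with, and this assumes your JMeter version passes JVM_ARGS through (recent ones do):
JVM_ARGS="-Xss256k" ./jmeter.sh
On Windows, the equivalent is adding set JVM_ARGS=-Xss256k next to the HEAP line in jmeter.bat.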
I had a similar issue and increased the heap size in jmeter.bat to 1024M and that fixed the issue.
set HEAP=-Xms1024m -Xmx1024m
For the JVM, if you read the hprof output it gives you some solutions, among which are:
switch to a 64-bit JVM (> 6u25)
with this you will be able to allocate a larger heap (-Xmx); ensure you have this much RAM
reduce Xss with:
-Xss256k
Then for JMeter, follow best-practices:
http://jmeter.apache.org/usermanual/best-practices.html
http://www.ubik-ingenierie.com/blog/jmeter_performance_tuning_tips/
Finally, ensure you use the latest JMeter version.
Use a Linux OS preferably.
Tune the TCP stack and limits.
Success will depend on your machine's power (CPU and memory) and your test plan.
If this is not enough (for 3000 threads it should be OK), you may need to use distributed testing.
Increasing the heap size in jmeter.bat works fine:
set HEAP=-Xms1024m -Xmx1024m
OR
you can do something like below if you are using jmeter.sh:
JVM_ARGS="-Xms512m -Xmx1024m" jmeter.sh etc.
I ran into this same problem and the only solution that helped me is: https://stackoverflow.com/a/26190804/5796780
Proper 100k threads on Linux:
ulimit -s 256
ulimit -i 120000
echo 120000 > /proc/sys/kernel/threads-max
echo 600000 > /proc/sys/vm/max_map_count
echo 200000 > /proc/sys/kernel/pid_max
If you don't have root access:
echo 200000 | sudo dd of=/proc/sys/kernel/pid_max
After increasing the Xms and Xmx heap sizes, I had to make Java run in 64-bit mode. In jmeter.bat:
set JM_LAUNCH=java.exe -d64
Obviously, you need to run a 64-bit OS and have 64-bit Java installed (see https://www.java.com/en/download/manual.jsp).

memory limit in Node.js (and chrome V8)

In many places on the web, you will see:
What is the memory limit on a node process?
and the answer:
Currently, by default V8 has a memory limit of 512mb on 32-bit systems, and 1gb on 64-bit systems. The limit can be raised by setting --max-old-space-size to a maximum of ~1gb (32-bit) and ~1.7gb (64-bit), but it is recommended that you split your single process into several workers if you are hitting memory limits.
Can somebody confirm this is the case, as Node.js seems to update frequently?
And more importantly, will it be the case in the near future?
I want to write JavaScript code which might have to deal with 4 GB of JavaScript objects (and speed might not be an issue).
If I can't do it in Node, I will end up doing it in Java (on a 64-bit machine), but I would rather not.
This has been a big concern for some using Node.js, and there is good news. The new memory limit for V8 is now unknown (not tested) for 64-bit, and it has been raised to as much as the 32-bit address space allows in 32-bit environments.
Read more here: http://code.google.com/p/v8/issues/detail?id=847
Starting a Node.js app with a heap memory of 8 GB:
node --max-old-space-size=8192 app.js
See node command line options documentation or run:
node --help --v8-options
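Because the default changes between Node versions, one way to see the limit (in bytes) that a given Node binary is actually running with is to ask V8 directly:
node -e "console.log(require('v8').getHeapStatistics().heap_size_limit)"
Combine it with --max-old-space-size to confirm the flag took effect; the reported limit should roughly match the value you pass.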
I'm running a process now on Ubuntu Linux that has a definite memory leak, and Node 0.6.0 is pushing 8 GB. Think it's handled :).
Memory limit max value is 3049 for 32-bit users
If you are running Node.js where os.arch() === 'ia32' is true, the max value you can set is 3049 (under my testing with Node v11.15.0 and Windows 10):
if you set it to 3050, it will overflow and effectively be set to 1;
if you set it to 4000, it will effectively be set to 951 (4000 - 3049).
Set Memory to Max for Node.js
node --max-old-space-size=3049
Set Memory to Max for Node.js with TypeScript
node -r ts-node/register --max-old-space-size=3049
See: https://github.com/TypeStrong/ts-node/issues/261#issuecomment-402093879
It looks like it's true. When I tried to allocate a 50 MB Buffer:
var buf = new Buffer(50*1024*1024);
I've got an error:
FATAL ERROR: CALL_AND_RETRY_2 Allocation failed - process out of memory
Meanwhile, there was about 457 MB of memory usage by Node.js in the process monitor.
