Parse open source server on Heroku throwing memory quota exceeded

I am running a Parse Server on Heroku with mLab.
When I run a search query on a table (which has 188K records), I receive the error below:
Process running mem=558M(109.2%)
2019-11-18T18:16:48.355162+00:00 heroku[web.1]: Error R14 (Memory quota exceeded)
I tried running the server via node --max-old-space-size=4096 parse/server.js, but the issue is still not resolved.
Please advise a solution.
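Error R14 means the dyno's RAM is exhausted, so raising V8's heap ceiling with --max-old-space-size cannot help: the dyno itself still has the same amount of memory. The usual remedy is to page through the query rather than materializing all 188K records at once. A minimal sketch of the pattern, where fetchPage is a hypothetical stand-in for a Parse query using query.limit() and query.skip():

```javascript
// Process a large result set in fixed-size batches so memory stays flat.
// fetchPage(skip, limit) is a stand-in for e.g. a Parse query that calls
// query.limit(limit); query.skip(skip); and returns query.find().
async function processInBatches(fetchPage, handleBatch, batchSize = 1000) {
  let skip = 0;
  for (;;) {
    const rows = await fetchPage(skip, batchSize);
    if (rows.length === 0) break;
    await handleBatch(rows); // work on this batch, then let it be GC'd
    skip += rows.length;
  }
  return skip; // total rows processed
}

// Usage with a stubbed data source of 2500 "records":
const data = Array.from({ length: 2500 }, (_, i) => i);
const fetchPage = async (skip, limit) => data.slice(skip, skip + limit);
processInBatches(fetchPage, async () => {}, 1000)
  .then(total => console.log(total)); // 2500
```

With the real SDK, fetchPage would build a Parse.Query, apply .limit() and .skip(), and return query.find(), so only one batch of objects is ever resident at a time.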

Related

I keep getting this Error R10 (Boot timeout) -> Web process failed to bind to $PORT within 60 seconds of launch

I recently installed Heroku Redis. Before that, the app worked just fine. I am using Bull for queuing and ioredis as the Redis library. I had connection issues initially, but I have resolved those, as I no longer get that error. However, this new error shows up.
Please check the details below:
Package.json Start Script
"scripts": {
"start": "sh ./run.sh"
}
run.sh file
node ./app/services/queues/process.js &&
node server.js
From the logs on the heroku console, I see this.
Processing UPDATE_USER_BOOKING... Press [ctrl C] to Cancel
{"level":"info","message":"mongodb connected"}
Line 1 is my log in the process script. It tells me that the consumer is running and ready to process any data it receives.
Line 2 tells me that Mongo is connected; it can be found in my server.js (entry file).
My challenge is that after those two lines, it shows this:
Error R10 (Boot timeout) -> Web process failed to bind to $PORT within 60 seconds of launch
Stopping process with SIGKILL
Error waiting for process to terminate: No child processes
Process exited with status 22
State changed from starting to crashed
So, I don't know why this is happening even though I have the PORT sorted out as described in their docs. See this for clarity:
app.listen(process.env.PORT || 4900, ()=>{})
Note: it was working until I introduced the Redis bit just a day ago.
Could there be an issue with the way I am running both the server and the queue process from the package.json file? I have been reading similar answers, but they usually focus on the PORT fix, which as far as I know is not my issue.
Troubleshooting: I removed the queue process from the start script and the issue was gone. I had this instead:
"scripts": {
"start": "node server.js -p $PORT"
}
So it became clear that this line was the issue:
node ./app/services/queues/process.js
Now, how do I run this queue process script? I need it to run so it can listen for any subscription and then run the processor script. It works fine locally with the former start script.
Please note: I am using Bull for the queue. I followed this guide to implement it, and it worked fine locally.
Bull Redis Implementation Nodejs Guide
I will appreciate any help on this as I am currently blocked on my development.
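A conventional fix for the blocking && chain is to background the consumer with a single & in run.sh, or better, to give the consumer its own dyno via a Procfile so Heroku supervises both processes. A sketch reusing the paths from the question (the worker dyno would then be enabled with heroku ps:scale worker=1):

```
web: node server.js
worker: node ./app/services/queues/process.js
```

With a separate worker dyno, only the web process must bind to $PORT, so the R10 boot timeout cannot be triggered by the queue consumer.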
So I decided to go another way. I read up on how to run background jobs on Heroku with Bull and found a guide, which I implemented. The idea is to utilize Node's cluster API for concurrency; the guide uses a small wrapper called throng.
I removed the process file and just wrapped my consumer script inside the start function and passed that to throng.
Link to heroku guide on enabling concurrency in your app
Result: I started getting an EADDRINUSE error, because app.listen() was being run twice.
Solution: I had to wrap the app.listen() call inside a worker function and pass it to throng, and it worked fine.
Link to the solution to the EADDR in use Error
On my local machine, I was able to push to the queue and consume from it. After deploying to Heroku, I am not getting any errors so far.
I have tested the update on heroku and it works fine too.

Laravel Heroku App Suddenly Crashing after Deploy

I made a few changes to a Blade template - no changes to controllers, etc. - and confirmed that there are no errors locally.
I pushed the changes to Github and triggered a build and deploy of my Laravel application.
However, my application didn't start and now the logs read:
2019-01-14T16:41:22.580202+00:00 app[web.1]: DOCUMENT_ROOT changed to 'public/'
2019-01-14T16:41:22.656846+00:00 app[web.1]: Optimizing defaults for 1X dyno...
2019-01-14T16:41:22.690437+00:00 app[web.1]: 2 processes at 256MB memory limit.
2019-01-14T16:41:22.707069+00:00 app[web.1]: Starting php-fpm...
2019-01-14T16:41:23.935071+00:00 heroku[web.1]: State changed from starting to crashed
2019-01-14T16:41:23.815103+00:00 heroku[web.1]: Error R10 (Boot timeout) -> Web process failed to bind to $PORT within 60 seconds of launch
2019-01-14T16:41:23.815215+00:00 heroku[web.1]: Stopping process with SIGKILL
2019-01-14T16:41:23.914103+00:00 heroku[web.1]: Process exited with status 137
I tried to restart the dynos to see if that would have an effect but it didn't. I did some searching on StackOverflow but couldn't find anything that was particularly helpful.
I do have a .user.ini file with the 256MB memory limit set (as reflected in the logs) but didn't make any changes to it.
I have not tried reverting my changes to the Blade template because I don't understand how that could lead to this boot timeout error.
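For reference, the 256 MB figure in the logs typically comes from a memory_limit directive in that .user.ini; Heroku's PHP buildpack derives the process count from it. A sketch of such a file (the question's actual contents were not shown):

```
; .user.ini at the web root
memory_limit = 256M
```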
The comment from @ceejayoz helped me figure out what was wrong. Reverting my changes one by one led me to a fairly obvious issue that I was able to correct and redeploy without further problems.

Unable to mount new volume on node

Hi, I'm trying to mount a new volume for my DB pod. I execute kubectl describe pod rc-chacha-5064p to see what is taking so long, and I get the following:
FailedMount AttachVolume.Attach failed for volume "db-xxxx-disk-pv" : googleapi: Error 403: Exceeded limit 'maximum_persistent_disks' on resource 'gke-xxxx-cluster-1-db-pool-xxxxx-xxxx'. Limit: 16.0
Is there a way to raise that limit? I already went through the Google quotas, but there is nothing about this kind of restriction. Any help would be appreciated.
This is not a quota issue but a node-level limit. Using beta APIs, you can create a machine type that can mount a greater number of disks. See https://cloud.google.com/compute/docs/disks/#increased_persistent_disk_limits
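The limit the answer refers to is surfaced by the kubelet as a node resource, so it can be inspected directly. A hedged sketch (the node name is a placeholder; for GCE persistent disks the resource is named attachable-volumes-gce-pd):

```shell
# Show the node's attachable GCE persistent disk limit.
kubectl describe node gke-xxxx-cluster-1-db-pool-xxxxx-xxxx | grep -i attachable-volumes-gce-pd
```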

Redis Cache - "Server Closed the connection" error

I was running some tests to understand maxmemory-reserved and maxmemory-policy, and we faced the “Server closed the connection” error a few times when the Redis DB was almost full. Here are the details:
1) Created the Redis cache with the Standard C1 (1 GB) tier, chose “allkeys-lru”, and set maxmemory-reserved to 50 MB.
2) Ran the redis-benchmark tool to add keys to the Redis DB, to make sure the DB is almost full.
3) As soon as the DB reached around ~960-980 MB, ran the benchmark tool again to add some more keys and got the following error. In which scenarios can this error occur?
Note: the connected_clients value was 0 when we ran the info command just before we encountered this error.
4) At the same time, ran the info command on the Azure portal console and got the output “Error”.
5) This error lasted approximately 2-3 minutes, and we were able to add keys after that. Once we ran the info command again, we got the following stats. Here we see that the difference between used_memory and used_memory_rss is around 76 MB. Do you think the above error could be because of this?
info
# Server
redis_version:3.2.3
redis_mode:standalone
os:Windows
arch_bits:64
multiplexing_api:winsock_IOCP
hz:10
# Clients
connected_clients:2
client_longest_output_list:0
client_biggest_input_buf:0
client_total_writes_outstanding:0
client_total_sent_bytes_outstanding:0
blocked_clients:0
# Memory
used_memory:968991592
used_memory_human:924.10M
used_memory_rss:1049776128
used_memory_rss_human:1001.14M
used_memory_peak:1070912296
used_memory_peak_human:1021.30M
used_memory_lua:37888
maxmemory:1100000000
maxmemory_human:1.02G
maxmemory_policy:allkeys-lru
mem_allocator:jemalloc-3.6.0
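On the 76 MB gap asked about in point 5: the difference between used_memory_rss and used_memory is ordinary allocator fragmentation rather than an error trigger. Redis reports it as mem_fragmentation_ratio, which the numbers above put at roughly 1.08 (values near 1.0 are healthy); a quick check:

```javascript
// Compute the fragmentation ratio from the INFO stats quoted above.
const usedMemory = 968991592;      // used_memory (Redis's own accounting)
const usedMemoryRss = 1049776128;  // used_memory_rss (OS-level resident size)
const fragmentationRatio = usedMemoryRss / usedMemory;
console.log(fragmentationRatio.toFixed(2)); // "1.08"
```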
Most likely you are running into a scenario of many unauthenticated connections. redis-benchmark first creates all the client connections (in your case, -c 400 connections) and then authenticates them. The delay in auth causes a high number of unauthenticated connections from a single IP, and Azure Redis Cache closes them for DoS protection; hence the error “Server closed the connection”.
You can try the redis-benchmark from here, which I have modified to authenticate as soon as a connection has been made; it should solve this issue.
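Independent of the modified tool, keeping the concurrent connection count small also keeps the burst of unauthenticated connections under the DoS-protection threshold. A hedged sketch of such an invocation (host and key are placeholders):

```shell
# -a supplies the access key; -c 50 keeps simultaneous connections modest.
redis-benchmark -h mycache.redis.cache.windows.net -p 6379 \
  -a "<access-key>" -c 50 -n 100000 -d 1024 -t set
```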

[Error]: Failed to run command with error: Error Domain=Parse Code=428

I get this error sometimes when trying to save things to Parse or to fetch data from it.
It is not constant and appears once in a while, making the operation fail.
I have contacted Parse for that. Here is their answer:
Starting on 4/28/2016, apps that have not migrated their database may see a "428" error code if the request cannot be handled by the remaining shared pool of resources. If you see this error in your logs, we highly recommend migrating the database for your app without delay.
This means that, starting on that date, all apps are on low priority except those that have started DB migration. So migrating the DB should resolve the issue.
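Migrating the database is the actual fix; in the meantime, a transient 428 can be retried with backoff. A minimal sketch (saveWithRetry and the stub object are hypothetical, not Parse SDK code; a real call site would pass a Parse.Object):

```javascript
// Retry an async save when the transient Parse 428 error appears.
async function saveWithRetry(obj, attempts = 3, delayMs = 50) {
  for (let i = 0; i < attempts; i++) {
    try {
      return await obj.save();
    } catch (e) {
      if (e.code !== 428 || i === attempts - 1) throw e; // only retry 428s
      await new Promise(r => setTimeout(r, delayMs * (i + 1))); // backoff
    }
  }
}

// Stub that fails twice with code 428, then succeeds, to show the retry path.
let calls = 0;
const stub = {
  save: async () => {
    if (++calls < 3) {
      const err = new Error('resource limit exceeded');
      err.code = 428;
      throw err;
    }
    return 'saved';
  },
};
saveWithRetry(stub).then(result => console.log(result, calls)); // saved 3
```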
