So I deployed my Discord bot and it seems fine until, at some random point, it just crashes. None of my files have any errors at all. So here's my log from Heroku.
2020-04-17T13:20:43.077260+00:00 heroku[web.1]: State changed from crashed to starting
2020-04-17T13:20:46.867420+00:00 app[web.1]:
2020-04-17T13:20:46.867435+00:00 app[web.1]: > edinburgh#1.0.0 start /app
2020-04-17T13:20:46.867435+00:00 app[web.1]: > node index.js
2020-04-17T13:20:46.867436+00:00 app[web.1]:
2020-04-17T13:20:47.375288+00:00 app[web.1]: Edinburgh is ready to go.
2020-04-17T13:21:45.447426+00:00 heroku[web.1]: State changed from starting to crashed
Can anyone help me? At this point I have no way to solve this by myself.
Seems like your application crashed... a few more details would be nice.
I assume it runs flawlessly when you just execute node index.js on your machine?
Back when I used Heroku, it was always a pain in the ass to configure everything around dependency installation; that could be a reason why it crashed. A plain JS error, however, would make things much easier to track down.
Also, could you perhaps show your Procfile? I guess it doesn't include a lot, but seeing it would be nice.
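For a bot like this, the Procfile is usually just a single worker line, something like the following (assuming your entry point is index.js, as your log suggests):
worker: node index.js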
Related
When I want to start my DDEV project, a container gets stuck at creating
Container ddev-oszimt-lf12a-v2-db Started
Error Message:
Failed waiting for web/db containers to become ready: db container failed: log=, err=health check timed out after 2m0s: labels map[com.ddev.site-name:oszimt-lf12a-v2 com.docker.compose.service:db] timed out without becoming healthy, status=
It's an error I also had with some other projects.
There is no information about this in the error log.
What could the problem be, and how do I fix it?
This isn't a very good place to work through problems with specific projects; our Discord channel or the DDEV issue queue is much better.
But I'll try to give you some ideas about how to study and debug this.
Go to the Troubleshooting section of the docs. Work through it step-by-step.
As it says there, try the simplest possible project and see what the results are.
If the problem is specific to one particular project, see if you can remove customizations like .ddev/docker-compose.*.yaml files, config.*.yaml files, and non-standard things in the config.yaml file.
To find out what causes the health check timeout, see the docs on this exact problem; in your case the db container is timing out. So first, run ddev logs -s db to see if something happened, and second, run docker inspect --format "{{json .State.Health }}" ddev-<projectname>-db.
For more help, you'll need to provide more information, with things like your OS, Docker provider, etc., and the easiest way to do that is to run ddev debug test, capture the output, and put it in a gist on gist.github.com, then come over to Discord with a link to that.
I recently installed Heroku Redis. Until then, the app worked just fine. I am using Bull for queuing and ioredis as the Redis library. I had connection issues initially, but I have resolved those and no longer get that error. However, the new error described below now shows up.
Please check these details below:
Package.json Start Script
"scripts": {
"start": "sh ./run.sh"
}
run.sh file
node ./app/services/queues/process.js &&
node server.js
From the logs on the Heroku console, I see this:
Processing UPDATE_USER_BOOKING... Press [ctrl C] to Cancel
{"level":"info","message":"mongodb connected"}
Line 1 is my log from the process script. It tells me that the consumer is running and ready to process any data it receives.
Line 2 tells me that Mongo is connected; it comes from my server.js (entry file).
My challenge is that after those two lines, it then shows this:
Error R10 (Boot timeout) -> Web process failed to bind to $PORT within 60 seconds of launch
Stopping process with SIGKILL
Error waiting for process to terminate: No child processes
Process exited with status 22
State changed from starting to crashed
So I don't know why this is happening, even though I have the PORT sorted out already as described in their docs. See this for clarity:
app.listen(process.env.PORT || 4900, ()=>{})
Note: it was working fine until I introduced the Redis bit just a day ago.
Could there be an issue with the way I am running both the server and the queue process in the package.json start script? I have been reading similar answers, but they usually focus on the PORT fix, which is not my issue as far as I can tell.
Troubleshooting: I removed the queue process from the start script and the issue was gone. I had this instead:
"scripts": {
"start": "node server.js -p $PORT"
}
So it becomes clear that this line:
node ./app/services/queues/process.js
was the issue, presumably because run.sh chains it with &&, so node server.js only starts after the consumer exits and the web process never binds to $PORT in time.
Now, how do I run this queue process script? I need it running to listen for any subscription and then run the processor script. It works fine locally with the former start script.
Please note: I am using Bull for the queue. I followed this guide to implement it, and it worked fine locally:
Bull Redis Implementation Nodejs Guide
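Roughly, the process script is just a Bull consumer along these lines (a simplified sketch; the actual job handler and any ioredis options are omitted):

const Queue = require('bull');

// Queue name taken from the log above; REDIS_URL is provided by the Heroku Redis add-on
const bookingQueue = new Queue('UPDATE_USER_BOOKING', process.env.REDIS_URL);

console.log('Processing UPDATE_USER_BOOKING... Press [ctrl C] to Cancel');

bookingQueue.process(async (job) => {
  // handle the job data here (placeholder)
  return job.data;
});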
I will appreciate any help on this as I am currently blocked on my development.
So I decided to go another way. I read up on how to run background jobs on Heroku with Bull and found a guide, which I implemented. The idea is to utilize Node's concurrency (cluster) API; the guide uses a wrapper called throng for this.
I removed the process file and just wrapped my consumer script inside the start function and passed that to throng.
Link to heroku guide on enabling concurrency in your app
Result: I started getting an EADDRINUSE error, which was because app.listen() was being run twice.
Solution: I had to wrap the app.listen call inside a worker function too and pass it to throng, and it worked fine.
Link to the solution to the EADDRINUSE error
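For reference, the final setup looks roughly like this (a simplified sketch, assuming the throng({ workers }, start) call shape from the Heroku guide; the queue handler body is a placeholder):

const throng = require('throng');
const express = require('express');
const Queue = require('bull');

const WORKERS = process.env.WEB_CONCURRENCY || 1;

function start() {
  // each clustered worker starts the web server...
  const app = express();
  app.listen(process.env.PORT || 4900, () => console.log('web worker listening'));

  // ...and the Bull consumer that used to live in process.js
  const bookingQueue = new Queue('UPDATE_USER_BOOKING', process.env.REDIS_URL);
  bookingQueue.process(async (job) => {
    // handle UPDATE_USER_BOOKING jobs here (placeholder)
  });
}

// throng forks WORKERS copies of start() via Node's cluster module
throng({ workers: WORKERS }, start);

This way a single process type runs both the web server and the queue consumer.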
On my local machine, I was able to push to the queue and consume from it. After deploying to Heroku, I am not getting any errors so far.
I have tested the update on Heroku and it works fine too.
I hosted my bot on Heroku and set all the configs (worker, token...).
When I try to turn it on I'm forced to use the console, but when I close it after typing the node index command, the bot goes offline.
App logs:
2019-04-21T11:52:21.580110+00:00 heroku[run.9063]: State changed from starting to up
2019-04-21T11:52:21.423708+00:00 heroku[run.9063]: Awaiting client
2019-04-21T11:52:21.721889+00:00 heroku[run.9063]: Starting process with command `node index`
2019-04-21T11:52:24.425348+00:00 heroku[run.9063]: Client connection closed. Sending SIGHUP to all processes
2019-04-21T11:52:24.962968+00:00 heroku[run.9063]: State changed from up to complete
2019-04-21T11:52:24.944749+00:00 heroku[run.9063]: Process exited with status 129
The bot goes offline because if you start it from the console the process is "bound" to that window: closing that window will also close the process.
To avoid these problems you can try making your dyno start the bot:
Go into your Procfile and add the command you use to start the bot (both node and npm work) as your worker.
If you don't know what the Procfile is, please take a look at this article.
When you're done it should look something like this:
worker: node index
After that, commit the Procfile to your repo and push it to Heroku: you should see your dyno type in the "Resources" tab of your app. Make sure that the dyno type you just added is the only active one.
(Why do I need to use the worker dyno?)
From now on, every time your app is deployed, Heroku will run the command you entered as soon as the dyno is loaded. If you want to see your app's logs, you can either use the "More" menu > "View logs" or, if you have the Heroku CLI installed on your computer, the following command:
heroku logs -a your-app-name-here --tail
I made a few changes to a Blade template - no changes to controllers, etc. - and confirmed that there are no errors locally.
I pushed the changes to Github and triggered a build and deploy of my Laravel application.
However, my application didn't start and now the logs read:
2019-01-14T16:41:22.580202+00:00 app[web.1]: DOCUMENT_ROOT changed to 'public/'
2019-01-14T16:41:22.656846+00:00 app[web.1]: Optimizing defaults for 1X dyno...
2019-01-14T16:41:22.690437+00:00 app[web.1]: 2 processes at 256MB memory limit.
2019-01-14T16:41:22.707069+00:00 app[web.1]: Starting php-fpm...
2019-01-14T16:41:23.935071+00:00 heroku[web.1]: State changed from starting to crashed
2019-01-14T16:41:23.815103+00:00 heroku[web.1]: Error R10 (Boot timeout) -> Web process failed to bind to $PORT within 60 seconds of launch
2019-01-14T16:41:23.815215+00:00 heroku[web.1]: Stopping process with SIGKILL
2019-01-14T16:41:23.914103+00:00 heroku[web.1]: Process exited with status 137
I tried restarting the dynos to see if that would have an effect, but it didn't. I did some searching on Stack Overflow but couldn't find anything that was particularly helpful.
I do have a user.ini file with the 256MB memory limit set (as is reflected in the logs) but didn't make any changes to that.
I have not tried reverting my changes to the Blade template because I don't understand how that could lead to this boot timeout error.
The comment from #ceejayoz helped me figure out what was wrong. Reverting changes one by one led me to a fairly obvious issue that I was able to correct and redeploy without issue.
I'm currently trying to deploy Eremetic (version 0.28.0) on top of Marathon using the configuration provided as an example. I actually have been able to deploy it once, but suddenly, after trying to redeploy it, the framework stays inactive.
By inspecting the logs I noticed a constant attempt to connect to some service that apparently never succeeds because of some authentication problem.
2017/08/14 12:30:45 Connected to [REDACTED_MESOS_MASTER_ADDRESS]
2017/08/14 12:30:45 Authentication failed: EOF
It looks like the service returning the error is ZooKeeper, and more precisely the error can be traced back to this line in the Go ZooKeeper library. ZooKeeper itself, however, seems to work: I've queried it directly with zkCli and run a small Spark job (where the Mesos master is given as a zk:// URL), and everything works.
Unfortunately I'm not able to diagnose the problem further. What could it be?
It turned out to be a configuration problem: the master URL was simply wrong, and this is just how the error was reported.