I hosted my bot on Heroku and set all the configs (worker, token, ...).
To start it I have to use the console, but when I close it after running the node index command, the bot goes offline.
App logs:
2019-04-21T11:52:21.580110+00:00 heroku[run.9063]: State changed from starting to up
2019-04-21T11:52:21.423708+00:00 heroku[run.9063]: Awaiting client
2019-04-21T11:52:21.721889+00:00 heroku[run.9063]: Starting process with command `node index`
2019-04-21T11:52:24.425348+00:00 heroku[run.9063]: Client connection closed. Sending SIGHUP to all processes
2019-04-21T11:52:24.962968+00:00 heroku[run.9063]: State changed from up to complete
2019-04-21T11:52:24.944749+00:00 heroku[run.9063]: Process exited with status 129
The bot goes offline because a process started from the console is "bound" to that window: closing the window also kills the process.
To avoid this, you can have your dyno start the bot instead:
Open your Procfile and add the command you use to start the bot (both node and npm commands work) as your worker process.
If you don't know what the Procfile is, please take a look at this article.
When you're done it should look something like this:
worker: node index
After that, commit the Procfile to your repo and push it to Heroku: you should see your dyno type in the "Resources" tab of your app. Make sure that the dyno type you just added is the only active one.
(Why do I need to use the worker dyno?)
From now on, every time your app is deployed, Heroku will run the command you entered as soon as the dyno is loaded. If you want to see your app's logs you can either use the "More" menu > "View logs" or, if you have the Heroku CLI installed on your computer, the following command:
heroku logs -a your-app-name-here --tail
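For completeness, here's a minimal sketch of what index could look like so it keeps running under the worker dyno and picks up its token from a config var. This assumes a discord.js bot (v12 or earlier) and a config var named DISCORD_TOKEN; adapt it to your actual library and variable names:

// index.js - minimal sketch (assumes discord.js v12 or earlier;
// newer versions also require gateway intents to be passed to Client)
const Discord = require('discord.js');
const client = new Discord.Client();

client.on('ready', () => {
  // This line will show up in `heroku logs` once the worker dyno boots
  console.log(`Logged in as ${client.user.tag}`);
});

// Read the token from a Heroku config var instead of hard-coding it
client.login(process.env.DISCORD_TOKEN);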
I have a Chainlink node running two jobs: they communicate with an external adapter and then send the value on-chain.
One job works fine, which already tells me that the node can write on-chain.
The other job receives the request and talks to the external adapter (I have verified this on the external adapter server), but then doesn't submit anything on-chain.
There is no way to debug this through the Operator UI.
What should I do? I am running the Chainlink develop version because the most up-to-date stable version has a critical bug.
In Chainlink node version 1.8.0, the node UI in the browser has "Error" and "Runs" tabs, and these two tabs let you view what went wrong with your job run. You can find the latest Chainlink Docker image here.
The error messages under the "Error" tab reflect the error your job encountered during the run.
If there are no "Error" and "Runs" tabs in the browser, or there is nothing shown in the UI, you can also find error info in the log file kept on the server running the Chainlink node. The default path of the Chainlink node log file is /chainlink/chainlink_debug.log, so you can log into the server that runs the node and check the log for debugging.
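If the node runs in Docker, one quick way to read that log is something like the command below (the container name chainlink is only an example, substitute your own):

docker exec -it chainlink tail -n 200 /chainlink/chainlink_debug.log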
Hope it helps.
I recently installed Heroku Redis. Until then, the app worked just fine. I am using Bull for queuing and ioredis as the Redis library. I had connection issues initially, but I have resolved those as I no longer get that error. However, the new error described below shows up.
Please check these details below:
Package.json Start Script
"scripts": {
"start": "sh ./run.sh"
}
run.sh file
node ./app/services/queues/process.js &&
node server.js
From the logs on the Heroku console, I see this:
Processing UPDATE_USER_BOOKING... Press [ctrl C] to Cancel
{"level":"info","message":"mongodb connected"}
Line 1 is my log from the process script. This tells me that the consumer is running and ready to process any data it receives.
Line 2 tells me that Mongo is connected. That log can be found in my server.js (entry file).
My challenge is that after those two lines, it then shows this:
Error R10 (Boot timeout) -> Web process failed to bind to $PORT within 60 seconds of launch
Stopping process with SIGKILL
Error waiting for process to terminate: No child processes
Process exited with status 22
State changed from starting to crashed
So I don't know why this is happening even though I have the PORT sorted out already, as described in their docs. See this for clarity:
app.listen(process.env.PORT || 4900, ()=>{})
Note: it was working until I introduced the Redis bit just a day ago.
Could there be an issue with the way I am running both the server and the queue process in the package.json file? I have been reading similar answers, but they are usually focused on the PORT fix, which is not my issue as far as I know.
Troubleshooting: I removed the queue process from the start script and the issue was gone. I had this instead:
"scripts": {
"start": "node server.js -p $PORT"
}
So it becomes clear that the line below was the issue:
node ./app/services/queues/process.js
Now, how do I run this queue process script? I need it running so it can listen for any subscription and then run the processor script. It works fine locally with the former start script.
Please Note: I am using Bull for the Queue. I followed this guide to implement it and it worked fine locally.
Bull Redis Implementation Nodejs Guide
I will appreciate any help on this as I am currently blocked on my development.
So I decided to go another way. I read up on how to run background jobs on Heroku with Bull and found a guide, which I implemented. The idea is to utilize Node's concurrency; the guide uses a small wrapper called throng to implement this.
I removed the process file and just wrapped my consumer script inside the start function that gets passed to throng.
Link to heroku guide on enabling concurrency in your app
Result: I started getting an EADDRINUSE error, which was because app.listen() was being run twice.
Solution: I had to wrap the app.listen call inside the worker function passed to throng, and it worked fine; a rough sketch of the final shape is below.
Link to the solution to the EADDR in use Error
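Roughly, it ended up looking like this. This is a simplified sketch, not my exact code: it assumes express for the web server, a Bull queue named UPDATE_USER_BOOKING, and Heroku Redis exposing its URL via the REDIS_URL config var. The two-argument throng call is the form used in the Heroku guide; newer throng versions prefer a single options object.

// server.js - simplified sketch
const throng = require('throng');
const express = require('express');
const Queue = require('bull');

const WORKERS = parseInt(process.env.WEB_CONCURRENCY || '1', 10);

function start() {
  const app = express();

  // The Bull consumer now lives in the same process as the web server,
  // so run.sh no longer needs a separate `node process.js` step
  const bookingQueue = new Queue('UPDATE_USER_BOOKING', process.env.REDIS_URL);
  bookingQueue.process(async (job) => {
    // handle the job payload here
    return job.data;
  });

  // app.listen is only called inside the worker function,
  // which is what fixed the EADDRINUSE error
  app.listen(process.env.PORT || 4900, () => {
    console.log('web process up');
  });
}

// throng forks WORKERS copies of start()
throng({ workers: WORKERS }, start);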
On my local machine, I was able to push to the queue and consume from it. After deploying to Heroku, I am not getting any errors so far.
I have tested the update on Heroku and it works fine too.
I've uploaded a single file to Heroku that crawls a website and responds with the edited content in JSON format to an HTTP request. I now want to update the content regularly so it stays up to date. I tried using the Heroku Scheduler, but I am failing to schedule the process so that it runs correctly.
I have specified the following process in the Heroku Scheduler:
run phantomjs phantom.js //Using 1X Dyno, every hour.
//phantom.js is the file that contains my source code and that runs the server.
However if I enter
heroku ps
into the terminal, I only see one web dyno running and no scheduler task. Also if I type
heroku logs --ps scheduler.1
as described in the Scheduler documentation, there is no output.
What am I doing wrong? Any help would be appreciated.
For what it sounds like you want to accomplish, you need to be constantly running:
1 Web Dyno
1 Background Worker
When your scheduled task executes, it will be run by the background worker, which, since you haven't provisioned it, isn't there to execute anything.
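If you do provision a background worker, declaring it works the same way as the web process in the Procfile, roughly like this (the worker command is only illustrative, point it at whatever script does the background work):

web: phantomjs phantom.js
worker: node worker.js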
Found it: I only had to write
phantomjs phantom.js
in order to get it working. It was the "run" that made the expression invalid.
I'm trying to start a service via a script that I run through cron. Basically it just does this
/local/services/servicename status
/local/services/servicename stop
/local/services/servicename start
The service starts fine if I run the commands myself, but whenever I run them via the script and then check the service status manually, the response is always:
Servicename Service is not running.
I am truly confused right now. Any particular reason why a bash script wouldn't be able to start the service?
Not really an answer to your specific question, but definitely a useful tip for debugging cron behavior in general: cron sends email messages to the user it runs as. If that script runs from the root crontab, run the mail command in a terminal as root and you'll see the cron messages in your inbox. Check them by typing the message number (or man mail to learn how to use it).
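A complementary trick, not part of the mail tip above but a common habit: redirect the script's output to a file directly in the crontab entry, so errors end up somewhere easy to read. The schedule and paths here are only examples:

*/5 * * * * /path/to/restart-service.sh >> /var/log/restart-service.log 2>&1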
I have a decently large DB that I'm trying to pull down locally from Heroku via db:pull.
I can never stick around my machine long enough to keep it from going to sleep, which effectively kills the connection and terminates the process. GOTO 1.
I know I could change my system settings to stop my computer from sleeping, which would keep the connection alive, but is there a way to continue a previous pull?
Or maybe the solution is just not to use db:pull for a large db.
heroku db:pull supports resuming. When you start a pull it will create a .dat file in your project (and get rid of it when it's completed). You can do:
heroku db:pull --resume FILE # resume transfer described by a .dat file
to start the pull from the previous location.
Heroku pgbackups may be a better option for grabbing a large DB file - http://devcenter.heroku.com/articles/pgbackups.
Although I'd be more inclined to prevent your computer from sleeping: just disable the sleep functionality while the download runs, from settings/control panel depending on your OS.
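For the pgbackups route, the typical flow at the time looked roughly like the following; the app name, local database, and user are placeholders, and the linked article is the authoritative reference:

heroku pgbackups:capture --app your-app-name-here
curl -o latest.dump `heroku pgbackups:url --app your-app-name-here`
pg_restore --verbose --clean --no-acl --no-owner -h localhost -U your_user -d your_local_db latest.dump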