Idle dyno still using free hours - Heroku

As the title says, it looks like my free dyno (the only dyno on my Heroku account) consumes free dyno hours even while idle. I checked it, and you can see the results below:
[user@host ~]$ heroku ps -a myapp
Free dyno hours quota remaining this month: 421h 16m (76%)
Free dyno usage for this app: 128h 43m (23%)
For more information on dyno sleeping and how to upgrade, see:
https://devcenter.heroku.com/articles/dyno-sleeping
=== web (Free): npm start (1)
web.1: up 2022/04/23 23:03:34 +0200 (~ 1m ago)
[user@host ~]$ heroku ps -a myapp
Free dyno hours quota remaining this month: 394h 53m (71%)
Free dyno usage for this app: 155h 5m (28%)
For more information on dyno sleeping and how to upgrade, see:
https://devcenter.heroku.com/articles/dyno-sleeping
=== web (Free): npm start (1)
web.1: idle 2022/04/23 23:36:32 +0200 (~ 15h ago)
[user@host ~]$ date
Sun Apr 24 15:32:04 CEST 2022
This basically says that at 2022/04/23 23:03:34 +0200 I had 421h 16m remaining.
Now, at Sun Apr 24 15:32:04 CEST 2022, I have 394h 53m remaining, even though the app has been idle since 2022/04/23 23:36:32 +0200, so it only ran for about 30 more minutes without receiving web traffic...
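For what it's worth, here is the arithmetic spelled out (a quick sketch assuming GNU date; the timestamps are copied from the output above):

start=$(date -d '2022-04-23 23:03:34 +0200' +%s)
end=$(date -d '2022-04-24 15:32:04 +0200' +%s)
echo "wall-clock elapsed: $(( (end - start) / 60 )) minutes"   # ~988 minutes, i.e. roughly 16.5h
# quota consumed between the two snapshots: 421h 16m - 394h 53m = 26h 23m,
# noticeably more than the ~16.5h of wall-clock time in between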
Any idea what happened to all those dyno hours? Or do I misunderstand how this works?

Related

Avoid waiting for user when checking the Apache Tomcat status

As part of a bash script, I check the status of a recently installed Apache Tomcat with
sudo systemctl status tomcat
The output is as follows
● tomcat.service
Loaded: loaded (/etc/systemd/system/tomcat.service; enabled; vendor preset: enabled)
Active: active (running) since Mon 2023-01-30 16:25:48 UTC; 3min 9s ago
Process: 175439 ExecStart=/opt/tomcat/bin/startup.sh (code=exited, status=0/SUCCESS)
Main PID: 175447 (java)
Tasks: 30 (limit: 4546)
Memory: 253.0M
CPU: 9.485s
CGroup: /system.slice/tomcat.service
└─175447 /usr/lib/jvm/java-1.11.0-openjdk-amd64/bin/java -Djava.util.logging.config.file=/opt/tomcat/conf/logging.properties -Djava.uti>
Jan 30 16:25:48 vps-06354c04 systemd[1]: Starting tomcat.service...
Jan 30 16:25:48 vps-06354c04 startup.sh[175439]: Tomcat started.
Jan 30 16:25:48 vps-06354c04 systemd[1]: Started tomcat.service.
Jan 30 16:25:48 vps-06354c04 systemd[1]: /etc/systemd/system/tomcat.service:1: Assignment outside of section. Ignoring.
Jan 30 16:25:48 vps-06354c04 systemd[1]: /etc/systemd/system/tomcat.service:2: Assignment outside of section. Ignoring.
This is the info I expect to see, but after printing it, systemctl waits for the user to press a key, which breaks the automation I'm trying to deliver.
How can I avoid this behaviour?
I'm pretty sure the --no-pager option will keep that from happening; I just confirmed it on my own system with a different service. Otherwise, systemctl goes interactive.
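For example, something like this should print the status and return immediately:

sudo systemctl --no-pager status tomcat

Piping through cat (sudo systemctl status tomcat | cat) should have the same effect, since systemctl only invokes the pager when stdout is a terminal.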
I don't recall ever seeing systemctl status ask for input, so perhaps it's the sudo in this command doing that, in which case you could ask your system administrator to enable passwordless sudo for the account that runs this command.
A general solution for automating user input in shell scripts is expect, but in a simple case where you only need to send a single value once, you can often get by with echo and a pipe (e.g., echo 'foo' | sudo systemctl status tomcat). Never do this with sensitive information such as passwords, though, because the value can be visible to other users on the system.
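If you do end up needing expect, here is a minimal sketch for answering a hypothetical confirmation prompt (the command name and prompt text are placeholders, not from the question):

expect <<'END'
# spawn the prompting command and answer its confirmation prompt once
spawn some-command-that-prompts
expect "Continue? \[y/N\]" { send "y\r" }
expect eof
END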

Extremely long response times from my app deployed on Heroku

I'm very new to Heroku.
I deployed my Telegram bot on Heroku's free plan to test how it works. It works great for the first 2-3 minutes, but after that every response from the app takes up to 6 hours!
The documentation says an app sleeps after 30 minutes without requests, but I can't find anything about responses taking 5-6 hours on the free plan.
I tried automatic and manual deployment; same situation.
Source code is on GitHub.
Running on my local machine, it works great 24/7.
Thank you in advance!
Update:
Apparently the app is expected to bind to some $PORT within 90 seconds of starting, and mine doesn't. I have no idea what that means.
After crashing, Heroku restarts the app; sometimes the restarts succeed, sometimes they don't.
Logs:
Mar 09 20:43:37 javamethodabot app/api Build succeeded
Mar 09 20:44:49 javamethodabot heroku/web.1 Error R10 (Boot timeout) -> Web process failed to bind to $PORT within 90 seconds of launch
Mar 09 20:44:49 javamethodabot heroku/web.1 Stopping process with SIGKILL
Mar 09 20:44:49 javamethodabot heroku/web.1 Process exited with status 137
Mar 09 20:44:50 javamethodabot heroku/web.1 State changed from starting to crashed
Mar 09 20:44:50 javamethodabot heroku/web.1 State changed from crashed to starting
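If the bot doesn't serve HTTP traffic itself, a common fix (a sketch; the jar path is an assumption, adjust it to the actual build output) is to declare the process as a worker rather than web in the Procfile, since worker dynos aren't expected to bind to $PORT:

worker: java -jar build/libs/bot.jar

Then scale the dynos accordingly, e.g. heroku ps:scale worker=1 web=0 -a <app-name>.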

Heroku workers crashing in laravel app if number of workers > 1

I've been using Heroku to host my application for several years and just started running into issues with the worker queue getting backlogged. I was hoping to fix this by increasing the number of workers so queued jobs could be processed in parallel, but whenever I scale up my workers, all but one crash.
Here's my Procfile:
web: vendor/bin/heroku-php-apache2 public
worker: php /app/artisan queue:restart && php /app/artisan queue:work redis --tries=3 --timeout=30
Here's the output from my server logs when I scale my workers to anything greater than 1 (in this example, scaling to 2 workers):
Mar 16 06:04:51 heroku/worker.1 Starting process with command `php /app/artisan queue:restart && php /app/artisan queue:work redis --tries=3 --timeout=30`
Mar 16 06:04:52 heroku/worker.1 State changed from starting to up
Mar 16 06:04:54 app/worker.1 Broadcasting queue restart signal.
Mar 16 06:04:58 heroku/worker.2 Process exited with status 0
Mar 16 06:04:58 heroku/worker.2 State changed from up to crashed
Mar 16 06:04:58 heroku/worker.2 State changed from crashed to starting
Mar 16 06:05:09 heroku/worker.2 Starting process with command `php /app/artisan queue:restart && php /app/artisan queue:work redis --tries=3 --timeout=30`
Mar 16 06:05:10 heroku/worker.2 State changed from starting to up
Mar 16 06:05:14 app/worker.2 Broadcasting queue restart signal.
Mar 16 06:05:19 heroku/worker.1 Process exited with status 0
Mar 16 06:05:19 heroku/worker.1 State changed from up to crashed
As you can see, both workers try to start, but only worker.2 stays up.
The crashed worker tries restarting every 10 minutes with the same result as above.
When I run heroku ps, here's what I see:
=== worker (Standard-1X): php /app/artisan queue:restart && php /app/artisan queue:work redis --tries=3 --timeout=30 (2)
worker.1: crashed 2021/03/16 06:05:19 -0600 (~ 20m ago)
worker.2: up 2021/03/16 06:05:10 -0600 (~ 20m ago)
(My normal web dynos scale up and down just fine, so I'm not showing them here.)
Any thoughts as to what could be happening? My first thought was an issue on Heroku's side, but I ruled that out. My second thought is that my Procfile entry for the worker could be causing problems, but I don't know enough about that entry to say what the cause might be.
Again, this has been working fine for 1 worker for a long time and the crashing only happens when I try to scale up to more than 1 worker. Regardless of how many workers I try scaling to, only one doesn't crash and remains active and able to receive and process jobs.
Misc info:
Heroku stack: Heroku-18
Laravel version: 8.*
Queue driver: Redis
Update - I scaled up the dynos on my staging environment and was able to scale the workers up and down without any kind of crashes. Now I'm thinking there might be some kind of add-on conflict or something else going on. I'll update this if I find anything else out (already reached out to Heroku support).
The problem was the php /app/artisan queue:restart command in the Procfile. Each newly started worker broadcast a restart signal, telling the workers that were already running to exit; Heroku treated those exits as crashes, which is why all but the most recently started worker went down.
I took out that command and I can scale my workers without issue now.
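For completeness, the worker entry in the Procfile now reads (matching the heroku ps output below):

worker: php /app/artisan queue:work redis --queue=high,default,sync,emails,cron --tries=3 --timeout=30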
=== worker (Standard-1X): php /app/artisan queue:work redis --queue=high,default,sync,emails,cron --tries=3 --timeout=30 (2)
worker.1: up 2021/03/17 17:29:32 -0600 (~ 8m ago)
worker.2: up 2021/03/17 17:35:58 -0600 (~ 2m ago)
When a deployment is made to Heroku, the dynos receive a SIGTERM that stops any lingering processes before the dynos are restarted. That makes the php /app/artisan queue:restart command redundant.
The main confusion came from how Laravel words the note about queue workers needing a restart on deployment (https://laravel.com/docs/8.x/queues#queue-workers-and-deployment). That step is necessary on servers that don't restart worker processes on deploy the way Heroku's dynos do.
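If you ever do need to broadcast queue:restart during a deploy, Heroku's release phase is arguably a better home for it than the worker command itself (a sketch, assuming the same app layout):

release: php /app/artisan queue:restart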

custom systemd service can't start on Ubuntu 18.04

Thanks in advance for any assistance.
I run the original Qt wallets (command-line based) for various cryptocurrencies. Earlier this year, I set them up as custom systemd services, and that has been invaluable: it starts them up and shuts them down with the system just like all the normal services. I recently discovered an issue with one in particular, blackcoin.
This service worked fine in the past (I don't know how long it was down before I noticed).
If I run the ExecStart= command manually, everything works just fine. If I try to start the service (via systemctl start blackcoin), it fails with the following service status:
blackcoin.service - blackcoin wallet daemon
Loaded: loaded (/etc/systemd/system/blackcoin.service; enabled; vendor preset: enabled)
Active: failed (Result: core-dump) since Tue 2018-11-20 10:44:01 MST; 2h 51min ago
Process: 12272 ExecStart=/usr/bin/blackcoind -datadir=/coindaemon-rundirectory/blackcoin/ -conf=/coindaemon-rundirectory/blackcoin/blackcoin.conf -daemon (code=exited, status=0/SUCCESS)
Main PID: 12283 (code=dumped, signal=ABRT)
Nov 20 10:44:01 knox systemd[1]: blackcoin.service: Service hold-off time over, scheduling restart.
Nov 20 10:44:01 knox systemd[1]: blackcoin.service: Scheduled restart job, restart counter is at 5.
Nov 20 10:44:01 knox systemd[1]: Stopped blackcoin wallet daemon.
Nov 20 10:44:01 knox systemd[1]: blackcoin.service: Start request repeated too quickly.
Nov 20 10:44:01 knox systemd[1]: blackcoin.service: Failed with result 'core-dump'.
Nov 20 10:44:01 knox systemd[1]: Failed to start blackcoin wallet daemon.
Here is the body of the systemd service:
##################################################################
## Blackcoin Systemd service ##
##################################################################
[Unit]
Description=blackcoin wallet daemon
After=network.target
[Service]
Type=forking
User=somedude
RuntimeDirectory=blackcoind
PIDFile=/run/blackcoind/blackcoind.pid
Restart=on-failure
ExecStart=/usr/bin/blackcoind \
-datadir=/home/somedude/blackcoin/ \
-conf=/home/somedude/blackcoin/blackcoin.conf \
-daemon
ExecStop=/usr/bin/blackcoind \
-datadir=/home/somedude/blackcoin/ \
-conf=/home/somedude/blackcoin/blackcoin.conf \
stop
# Recommended hardening
# Provide a private /tmp and /var/tmp.
PrivateTmp=true
# Mount /usr, /boot/ and /etc read-only for the process.
ProtectSystem=full
# Disallow the process and all of its children to gain
# new privileges through execve().
NoNewPrivileges=true
# Use a new /dev namespace only populated with API pseudo devices
# such as /dev/null, /dev/zero and /dev/random.
PrivateDevices=true
# Deny the creation of writable and executable memory mappings.
MemoryDenyWriteExecute=true
[Install]
WantedBy=multi-user.target
And this is what blackcoin.conf contains:
rpcuser=somedude
rpcpassword=12345 (please don't rob my coins!)
# Wallets
wallet=wallet-blackcoin.dat
pid=/run/blackcoind/blackcoind.pid
rpcport=56111
port=56112
I'm going to keep testing and will post anything new that I find. Thanks for looking!
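In the meantime, one way to narrow this down (a sketch using standard systemd tooling; nothing here is specific to blackcoind) is to comment out the hardening options one at a time, reload, retry, and inspect the journal and core dump:

sudo systemctl daemon-reload
sudo systemctl restart blackcoin
journalctl -u blackcoin -n 50 --no-pager    # recent unit log, including the ABRT message
coredumpctl info blackcoind                 # backtrace, if systemd-coredump is installed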

Apache won't start -- says httpd module is loaded but isn't running

So I've been working with several virtual hosts on OS X 10.8.2, using the Apache 2 installation and MySQL to run name-based virtual hosts. They had all been working perfectly fine until last night. Suddenly, all of my virtual hosts produce a "Cannot connect to" page.
After fiddling around and eventually checking the error logs, I've concluded that Apache is NOT actually running. For example, ps aux | grep apache returns only the grep process. However, if I try sudo /usr/sbin/apachectl start, I get "org.apache.httpd: Already loaded" in response.
I've checked my httpd.conf file and it looks perfectly fine; I can't see any changes to it. I also ran the syntax check command (whose name escapes me at the moment), and it returned OK. The last entry in my error log is from yesterday, Feb 21: "[Thu Feb 21 21:46:02 2013] [notice] caught SIGTERM, shutting down"
Since then, my Apache error logs contain nothing (because it's not running). I've rebooted and tried restarting Apache; I'm at a total loss as to why it thinks it's running even though it is not.
Any ideas?
In /var/log/system.log, when I try to start and restart Apache:
Feb 23 09:27:00 Baileys-MacBook-Pro com.apple.launchd[1] (org.apache.httpd[8766]): Exited with code: 1
Feb 23 09:27:00 Baileys-MacBook-Pro com.apple.launchd[1] (org.apache.httpd): Throttling respawn: Will start in 10 seconds
Feb 23 09:27:10 Baileys-MacBook-Pro com.apple.launchd[1] (org.apache.httpd[8767]): Exited with code: 1
Feb 23 09:27:10 Baileys-MacBook-Pro com.apple.launchd[1] (org.apache.httpd): Throttling respawn: Will start in 10 seconds
Feb 23 09:27:16 Baileys-MacBook-Pro.local sudo[8769]: bailey : TTY=ttys000 ; PWD=/private/var/log ; USER=root ; COMMAND=/usr/sbin/apachectl start
Feb 23 09:27:20 Baileys-MacBook-Pro com.apple.launchd[1] (org.apache.httpd[8772]): Exited with code: 1
Feb 23 09:27:20 Baileys-MacBook-Pro com.apple.launchd[1] (org.apache.httpd): Throttling respawn: Will start in 10 seconds
Feb 23 09:27:20 Baileys-MacBook-Pro.local sudo[8773]: bailey : TTY=ttys000 ; PWD=/private/var/log ; USER=root ; COMMAND=/usr/sbin/apachectl restart
Feb 23 09:27:20 Baileys-MacBook-Pro com.apple.launchd[1] (org.apache.httpd[8777]): Exited with code: 1
Feb 23 09:27:20 Baileys-MacBook-Pro com.apple.launchd[1] (org.apache.httpd): Throttling respawn: Will start in 10 seconds
Feb 23 09:27:26 Baileys-MacBook-Pro.local sudo[8778]: bailey : TTY=ttys000 ; PWD=/private/var/log ; USER=root ; COMMAND=/usr/bin/vi system.log
This problem persists after rebooting. Ever since the other day, Apache will not start, yet it believes the httpd module is loaded.
I'm trying to find out via Google, but does anyone know how Apache checks whether it's loaded? I know a lot of services use lock files; is it possible Apache has a lock file somewhere that's still locked even though Apache isn't currently running?
NOTE: I've posted this on Server Fault as well; so far I'm not getting anything there, and since I've seen Apache posts on Stack Overflow, I'm assuming Apache questions are fine here too.
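For reference, the "Already loaded" message comes from launchd rather than from Apache itself: on OS X of that era, apachectl delegates to launchctl, and the message means the org.apache.httpd launchd job is loaded, whether or not an httpd process is actually running. One way to inspect the job (a sketch):

sudo launchctl list | grep -i httpd
# a loaded-but-not-running job shows "-" in the PID column; the second column is the last exit status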
I can reproduce the issue (kinda) by starting Apache when there's another process already listening on the same port that Apache wants to bind to (usually that's port 80). So check if there's perhaps another process listening on that port:
sudo lsof -i tcp:80 | grep LISTEN
EDIT: Perhaps easier: you can start Apache manually in debug mode to see why it won't start:
sudo /usr/sbin/httpd -k start -e Debug -E /dev/stdout
In my case (something already listening on port 80), it will produce:
(48)Address already in use: make_sock: could not bind to address 0.0.0.0:80
In my case I got:
(2)No such file or directory: httpd: could not open error log file
/private/var/log/apache2/error_log. Unable to open logs
Creating the apache2 directory got it running.
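Concretely, what worked there (assuming the default macOS log path from the error message above):

sudo mkdir -p /private/var/log/apache2
sudo /usr/sbin/apachectl start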
I don't know if this is relevant, but since I faced the same problem and found an alternate solution, here are my 2c anyway.
I looked into this post when I hit the same issue. It turned out that the httpd.conf file was the culprit: I had changed it to install something, and although I removed the installer files, I forgot to change httpd.conf back. I hope you didn't face the same problem.
Regarding the question on port 80: I've seen Skype hog that port, as well as 443 (God knows what for), and I had better results after turning it off. Make sure you don't have Skype running on port 80.
robertklep's pointer:
sudo /usr/sbin/httpd -k start -e Debug -E /dev/stdout
solved a related problem for me. Same symptoms, different cause, I think.
I had set up a test virtual host with SSL and a self-signed certificate, and I had generated the private key with a passphrase. So httpd was waiting for a passphrase (which I wasn't supplying). When I started with the debug option, I got the prompt, supplied the passphrase, and httpd started up.
So I will redo the private key without a passphrase...
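For the record, the passphrase can also be stripped from an existing key with openssl (a sketch; the key paths are placeholders):

sudo openssl rsa -in server.key -out server-nopass.key

After pointing the vhost's SSLCertificateKeyFile at the new key, httpd should start without prompting.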
