Why doesn't my Heroku app save new files with TiddlyWiki?

Has anyone run into an issue like this before?
I have deployed TiddlyWiki5 to Heroku as an app at https://jameswiki.herokuapp.com. It displayed and worked as expected at runtime. However, after the web dyno sleeps and wakes up (often after an hour of inactivity), everything is wiped.
I have checked the Heroku console when creating a new tiddler: it still says the new tiddler has been saved, but in fact no new tiddler is written to the tiddlers folder. Below is my setup to install and run it:
In package.json:
{
  ...
  "scripts": {
    "start": "tiddlywiki . --server",
    "postinstall": "npm install -g tiddlywiki"
  }
}
In Procfile:
web: tiddlywiki . --server $PORT $:/core/save/all text/plain text/html "" "" 0.0.0.0
Please help me fix this issue. Thanks.

Heroku's filesystem is ephemeral - it exists only while that dyno exists. When the dyno restarts or ends (as it does when the app goes to sleep), the new one will have a fresh, empty filesystem. If you want files to persist, you need to save them off to something like a database or Amazon S3 for long-term storage.
https://devcenter.heroku.com/articles/dynos#isolation-and-security
Each dyno gets its own ephemeral filesystem, with a fresh copy of the most recently deployed code. During the dyno’s lifetime its running processes can use the filesystem as a temporary scratchpad, but no files that are written are visible to processes in any other dyno and any files written will be discarded the moment the dyno is stopped or restarted. For example, this occurs any time a dyno is replaced due to application deployment and approximately once a day as part of normal dyno management.
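If you want the tiddlers to survive restarts, you have to copy them off the dyno yourself. A minimal sketch using the AWS CLI, assuming an S3 bucket (the bucket name, the five-minute interval, and the simplified server command are all hypothetical, and AWS credentials would need to be set as Heroku config vars):

#!/bin/bash
# On boot, pull the last saved tiddlers into the fresh filesystem...
aws s3 sync s3://my-wiki-bucket/tiddlers ./tiddlers
# ...start the wiki server in the background...
tiddlywiki . --server $PORT &
# ...and push any changes back to S3 every five minutes.
while true; do
  sleep 300
  aws s3 sync ./tiddlers s3://my-wiki-bucket/tiddlers
done

Up to five minutes of edits can still be lost on a restart, so for anything important a real datastore is the safer option.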

Related

Can't save a temporary csv file to /home/app directory on heroku using R shiny app [duplicate]

I published my first simple app on Heroku with a free dyno. The app writes a simple .txt file, which seems to be written correctly, because my API services are working fine.
But when I check for this file by entering the filesystem with "heroku run bash -a MYAPP", I can't see it in the folder where I expected it. It is as if the file doesn't exist. Can someone tell me why?
Thanks.
I found this on https://devcenter.heroku.com/articles/active-storage-on-heroku:
In addition, any files stored on disk will not be visible from one-off dynos such as a heroku run bash instance or a scheduler task because these commands use new dynos.
It is still not so clear to me, but at least I know it is a normal (but strange) behaviour of Heroku!
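A quick way to see this behaviour for yourself (the /app/output.txt path is just a stand-in for wherever your app writes its file):

heroku run bash -a MYAPP      # boots a brand-new one-off dyno
~ $ ls /app/output.txt
ls: cannot access '/app/output.txt': No such file or directory

The one-off dyno starts from a fresh copy of the deployed slug, so anything the web dyno wrote after deployment is simply not there.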

How to restart Laravel queue workers inside a docker container?

I'm working on a production docker-compose setup to run my Laravel app. It has the following containers (amongst others):
php-fpm for the app
nginx
mysql
redis
queue workers (a copy of my php-fpm, plus supervisord).
deployment (another copy of my php-fpm, with a Gitlab runner installed inside it, as well as node+npm, composer etc)
When I push to my production branch, the GitLab runner inside the deployment container executes my deploy script, which builds everything, runs composer update, etc.
Finally, my deploy script needs to restart the queue workers, which are inside the queue workers container. When everything is installed together on a VPS, this is easy: php artisan queue:restart.
But how can I get the deployment container to run that command inside the queue workers container?
Potential solutions
My research basically indicates that containers should not talk to each other, but if you must, I have found four possible solutions:
install SSH in both containers
share docker.sock with the deployment container so it can control other containers via docker
have the queue workers container monitor a directory in the filesystem; when it changes, restart the queue workers
communicate between the containers with a tiny http server in the queue workers container
I really want to avoid 1 and 2, for complexity and security reasons respectively.
I lean toward 3 but am concerned about wasteful resource usage spent monitoring the fs. Is there a really lightweight method of watching a directory with as many files as a Laravel install has?
4 seems slightly crazy but certainly do-able. Are there any really tiny, simple http servers I could install into the queue workers container that can trigger a single command when the deployment container hits an endpoint?
I'm hoping for other suggestions, or if there really is no better way than 3 or 4 above, any suggestions on how to implement either of those options.
Delete the existing containers and create new ones.
A container is fundamentally a wrapper around a single process, so this is similar to stopping the workers with Ctrl+C or kill(1), and then starting them up again. For background workers this shouldn't interrupt more than their current tasks, and Docker gives them an opportunity to finish what they're working on before they get killed.
Since the code in the Docker image is fixed, when your CI system produces a new image, you need to delete and recreate your containers anyways to run them with the new image. In your design, the "deployment" container needs access to the host's Docker socket (option #2) to be able to do anything Docker-related. I might run the actual build sequence on a different system and push images via a Docker registry, but fundamentally something needs to sudo docker-compose ... on the target system as part of the deployment process.
A simple Compose-based solution would be to give each image a unique tag, and then pass that as an environment variable:
version: '3.8'
services:
  app:
    image: registry.example.com/php-app:${TAG:-latest}
    ...
  worker:
    image: registry.example.com/php-worker:${TAG:-latest}
    ...
Then your deployment just needs to re-run docker-compose up with the new tag:
ssh root@production.example.com \
    env TAG=20210318 docker-compose up -d
and Compose will take care of recreating the things that have changed.
I believe @David Maze's answer would be the recommended way, but I decided to post what I ended up doing in case it helps anyone.
I took a different approach because I am running my CI script inside my containers instead of using a Docker registry & having the CI script rebuild images.
I could still have given the deploy container access to docker.sock (option #2), thereby allowing my CI script to control Docker (e.g. rebuild containers), but I wasn't keen on the security implications of that, so I ended up doing #3, with a simple inotifywait watching for a change to a special timestamp.txt file that I modify in my CI script. Because it's monitoring only a single file, it's light on the CPU and is working well.
# Watch the timestamp directory so we know when to restart the workers.
SITE_DIR=/var/www/projectname/public_html
WATCH_DIR=/var/www/projectname/updated_at

while true
do
    # Block until something inside the watch directory is created or modified.
    if inotifywait -e create -e modify "$WATCH_DIR"
    then
        echo "Detected Site Code Change. Executing artisan queue:restart."
        sudo -H -u www-data php "$SITE_DIR/artisan" queue:restart
    fi
done
All the deploy script has to do to trigger a queue:restart is:
date > $WATCH_DIR/timestamp.txt

Prevent Heroku from starting a web dyno

I'd like to configure a Heroku app to run a scheduled task once per day. My source tree looks like this:
bin/myScript
Procfile
package.json
When I deploy the app, I see the following error:
2017-01-11T04:31:36.660973+00:00 app[web.1]: npm ERR! missing script: start
I believe this is because Heroku tries to spin up a web dyno. I don't have a web dyno, nor do I want one. So, to prevent Heroku from spinning up a web dyno, I created a Procfile with this line:
heroku ps:scale web=0
That didn't work. What else can I do to prevent my app from crashing upon deployment? Does it matter if the scheduled task is going to run in a separate one-off dyno anyway?
You should not have the line "heroku ps:scale web=0" in your Procfile.
Doing so tells Heroku to create a process type called "heroku" that attempts to run the command "ps:scale web=0" on any dyno instances instantiated for it. That would probably generate errors, and at any rate it is not what you intended.
Instead, you should run "heroku ps:scale web=0" as a Heroku CLI command (or do the equivalent from the Resources tab of the dashboard, as you already did).
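For reference, a Procfile for an app with no web process declares only the process type the task runs under. A sketch reusing bin/myScript from the question (the "worker" name is just a conventional choice):

worker: bin/myScript

With no web entry, the Node buildpack no longer falls back to its default web process of npm start (the source of the "missing script: start" error), and nothing needs to bind a port. If the task is only ever launched by Heroku Scheduler in a one-off dyno, the worker can even stay scaled to zero.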
I think I found a fix: in the "Resources" tab of the GUI for the web, there is a list of dynos with on/off sliders next to them. I switched the web dyno slider to off, and now when I deploy there is no crash. Still, it's unclear to me why the Procfile line was insufficient.

Unable to troubleshoot heroku log message No web processes running

I pushed my Flask app to Heroku and I'm trying to connect through my frontend for the first time. I was getting a 503 error, and checking the Heroku logs revealed
desc="No web processes running".
I double-checked my Procfile, deleted it, confirmed that git noticed the change to the file, made sure the file had no extension, then recreated it, pushed it back to Heroku, and ran heroku ps:scale web=1, but I'm still getting
Couldn't find that formation.
Is there anything else I should try?
This is what I have inside the Procfile: web: gunicorn manage:app. I'm creating the Procfile in TextEdit; could that be causing the issue?
This might be obvious to others, but it took me a while to figure out. I was using TextEdit on my Mac to create the Procfile, and I made sure to delete the extension after creating the file. Apparently, that's not good enough. I went into Google Drive, created a Procfile.txt, downloaded it to my app's root directory, and removed the extension. I pushed the changes, ran heroku ps:scale web=1, and it worked.
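A simpler route is to skip GUI editors entirely and create the Procfile from the terminal, which guarantees a plain-text file with no hidden extension (the gunicorn command is the one from the question):

printf 'web: gunicorn manage:app\n' > Procfile
ls               # must show exactly "Procfile", with no extension
file Procfile    # should report something like "ASCII text", not RTF

TextEdit defaults to rich text (RTF), and Finder can hide a .txt extension; either one leaves Heroku unable to recognize the Procfile.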

Using PM2 with Sails triggers a Grunt error

I'm using Sails on Heroku and I would like to optimize concurrency on 2X dynos by using pm2 to run two processes per dyno.
However, I'm facing a quite annoying problem: when I start the app with pm2 start app.js, the first process runs fine but the second one triggers a Grunt error:
2015-09-14T10:41:41.897208+00:00 app[web.3]: Running "clean:dev" (clean) task
2015-09-14T10:41:41.897209+00:00 app[web.3]: Cleaning .tmp/public...ERROR
2015-09-14T10:41:41.897211+00:00 app[web.3]: Warning: Unable to delete ".tmp/public" file (ENOTEMPTY, directory not empty '.tmp/public/images').
Aborted due to warnings.
Has anyone encountered this problem? It's quite annoying, as for the moment I'm paying for a 2X dyno but using only one processor...
Thank you
This can't be achieved by running multiple instances from a single folder, because Sails modifies, concatenates, minifies, etc. the files in the assets folder and places them in the .tmp folder. So if you run multiple instances, the Grunt processes will conflict. If you still want to do it, run the instances from separate folders: copy your project folder to another location and start one instance from each.
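A rough sketch of that workaround (the paths and process names are just examples): give each instance its own copy of the project, so each gets a private assets/.tmp and the Grunt tasks no longer collide.

cp -R /app /app-second                       # second copy for the second process
cd /app && pm2 start app.js --name sails-1
cd /app-second && pm2 start app.js --name sails-2

Note that in fork mode each instance still needs to listen on its own port; pm2's cluster mode can share a single port, but running it from one folder would reintroduce the Grunt conflict.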
