I deployed a Flask application to a VPS, and I am using Gunicorn as the web server.
I am running the Gunicorn server with this command:
gunicorn --bind=0.0.0.0 run:app --access-logfile '-'
With this command I can see the log output, but after closing my terminal session I want to be able to see the running logs again.
On Heroku I can use heroku logs -t for this; is there a similar way to do it with Gunicorn?
You need to set up Supervisor. Supervisor keeps your server running and saves your logs. Set up a Supervisor config file like the one below, and then you can view the logs:
[program:your_project_name]
command=/home/your_virtualenv/bin/gunicorn --log-level debug run_apiengine:main_app --bind 0.0.0.0:5007 --workers 2 --worker-class gevent
directory=your_project_directory
stdout_logfile=your_log_folder_path/supervisor_stdout.log
stderr_logfile=your_log_folder_path/supervisor_stderr.log
user=your_user
autostart=true
environment=PYTHONPATH="$PYTHONPATH:your_python_path",OAUTHLIB_INSECURE_TRANSPORT="1"
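Once the program is running under Supervisor, you can follow the logs again from any new SSH session. A minimal sketch, assuming the program name and log path from the config above:

# follow the log file Supervisor writes for the program
tail -f your_log_folder_path/supervisor_stdout.log

# or tail the program's stdout through supervisorctl
supervisorctl tail -f your_project_name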
Related
I am running a Flask/Python app for a website from my terminal using:
gunicorn --workers 4 --bind :5000 app:app
It shows me the log, but the session eventually times out, even though the website is still up and running. Is there a way I can go back to viewing the log?
So far, I can only get back to seeing the log by running sudo reboot on the Ubuntu server, then reconnecting over SSH and running the gunicorn command again.
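One common approach, not taken from this thread and with placeholder paths, is to send Gunicorn's access and error logs to files instead of the terminal, so they survive the SSH session and can be tailed later:

# write logs to files instead of the terminal (paths are examples)
gunicorn --workers 4 --bind :5000 app:app \
    --access-logfile /var/log/gunicorn/access.log \
    --error-logfile /var/log/gunicorn/error.log --daemon

# reattach to the logs from any later SSH session
tail -f /var/log/gunicorn/access.log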
I am trying to set up a development environment for Apache Airflow on a MacBook with macOS 10.14.x.
I have installed Docker and VirtualBox, created a virtual machine, and created containers for the webserver, worker, scheduler, Redis, and Postgres.
I run:
docker-compose up -d
But, when I visited http://localhost:8080, I got:
This page isn’t working
localhost didn’t send any data.
ERR_EMPTY_RESPONSE
In the docker-compose log file, I found:
webserver_1 | [INFO] Parent changed, shutting down: <Worker 34>
webserver_1 | [INFO] Worker exiting (pid: 34)
webserver_1 | {{cli.py:815}} ERROR - No response from gunicorn master within 120 seconds
webserver_1 | {{cli.py:816}} ERROR - Shutting down webserver
I am not sure what the problem could be.
Any suggestions would be appreciated.
After starting the Docker containers, try running docker exec -it NAME_OF_CONTAINER /bin/bash. That drops you into the container's bash shell, and from there you can run airflow webserver and see its output directly.
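A minimal sketch of those steps; the container name here is an example and should be replaced with whatever docker-compose ps reports for your webserver container:

docker-compose ps                        # list the containers and their names
docker exec -it webserver_1 /bin/bash    # open a shell in the webserver container (name is an example)
airflow webserver -p 8080                # run the webserver in the foreground to see why it fails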
I use a remote server to process beanstalkd queues that I would like to use on my application running on Heroku. Is there any way to run Supervisor to monitor that the queue command (Laravel: php artisan queue:listen) is running?
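A minimal Supervisor program sketch for keeping the Laravel queue listener alive on that remote server; the paths and user are placeholders, not values from this question:

[program:laravel-queue]
; keep the Laravel queue listener running and restart it if it dies
command=php /path/to/your/app/artisan queue:listen --tries=3
directory=/path/to/your/app
user=your_user
autostart=true
autorestart=true
stdout_logfile=/path/to/your/app/storage/logs/queue.log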
I am trying to set up Tomcat 7 on a DigitalOcean CoreOS machine but am facing some problems and not sure how to solve them. I am following the tutorial below, provided by DigitalOcean, for setting up Apache.
https://www.digitalocean.com/community/tutorials/how-to-create-and-run-a-service-on-a-coreos-cluster
I created a Docker container and ran it using the following command.
docker run -i -t ubuntu:14.04 /bin/bash
I was able to install Tomcat 7 using the commands below. (I followed this tutorial to set up Tomcat 7 inside the Docker container: https://www.digitalocean.com/community/tutorials/how-to-install-apache-tomcat-7-on-ubuntu-14-04-via-apt-get)
sudo apt-get update
sudo apt-get install tomcat7
Then I created a service unit file named tomcat#.service:
[Unit]
Description=Tomcat 7 web server service
After=etcd.service
After=docker.service
Requires=tomcat-discovery#%i.service
[Service]
TimeoutStartSec=0
KillMode=none
EnvironmentFile=/etc/environment
ExecStartPre=-/usr/bin/docker kill tomcat%i
ExecStartPre=-/usr/bin/docker rm tomcat%i
ExecStartPre=/usr/bin/docker pull attacomsian/tomcat
ExecStart=/usr/bin/docker run --name tomcat%i -p ${COREOS_PUBLIC_IPV4}:%i:8080 attacomsian/tomcat `service tomcat7 start` -D FOREGROUND
ExecStop=/usr/bin/docker stop tomcat%i
[X-Fleet]
X-Conflicts=tomcat#*.service
Then I created tomcat-discovery#.service to register the service state with etcd, as below:
[Unit]
Description=Announce Tomcat#%i service
BindsTo=tomcat#%i.service
[Service]
EnvironmentFile=/etc/environment
ExecStart=/bin/sh -c "while true; do etcdctl set /announce/services/tomcat%i ${COREOS_PUBLIC_IPV4}:%i --ttl 60; sleep 45; done"
ExecStop=/usr/bin/etcdctl rm /announce/services/tomcat%i
[X-Fleet]
X-ConditionMachineOf=tomcat#%i.service
I submitted and loaded the files to fleet as below:
fleetctl submit tomcat#.service tomcat-discovery#.service
fleetctl load tomcat#8080.service
fleetctl load tomcat-discovery#8080.service
Everything worked fine so far and I did not see any errors. But when I tried to start the service as below:
fleetctl start tomcat#8080.service
it did not start. I can see it showing as dead.
I am new to CoreOS and still learning. I manage servers at DigitalOcean and know it quite well. I googled this issue but did not find any help. I personally think the following line is actually causing the trouble:
ExecStart=/usr/bin/docker run --name tomcat%i -p ${COREOS_PUBLIC_IPV4}:%i:8080 attacomsian/tomcat `service tomcat7 start` -D FOREGROUND
I would really appreciate any help getting this up and running.
Many thanks,
Attacomsian
I was going to suggest you take a look at what others have done, and then discovered you have posted a similar question on the Docker Hub registry.
Did you take a look at the Docker file used by the tutum/tomcat image?
https://github.com/tutumcloud/tutum-docker-tomcat/blob/master/7.0/Dockerfile
https://github.com/tutumcloud/tutum-docker-tomcat/blob/master/7.0/run.sh
It runs a script called "run.sh" that runs tomcat in the foreground.
The thing that is tricky to understand is that Docker is not a virtual machine and therefore does not have any services running. You must run the Docker processes explicitly or set up a process manager like runit or supervisord.
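To illustrate the same idea, here is a hedged sketch of a Dockerfile that runs Tomcat in the foreground; it is not the tutum image or the attacomsian/tomcat image, and the paths are assumptions based on Ubuntu 14.04's tomcat7 package layout:

FROM ubuntu:14.04

# install Tomcat 7 from the Ubuntu repositories
RUN apt-get update && apt-get install -y tomcat7 && rm -rf /var/lib/apt/lists/*

# paths used by the Ubuntu tomcat7 package (adjust if your layout differs)
ENV CATALINA_HOME=/usr/share/tomcat7
ENV CATALINA_BASE=/var/lib/tomcat7
ENV CATALINA_TMPDIR=/tmp

EXPOSE 8080

# run Tomcat in the foreground instead of `service tomcat7 start`,
# so Docker has a long-running process that keeps the container alive
CMD ["/usr/share/tomcat7/bin/catalina.sh", "run"]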
Hope this helps.
I know how to run sentry start.
But when I change sentry.conf.py, how can I make the changes take effect?
I ran sentry help and cannot find a sentry stop or sentry restart command.
Is there a way to restart the Sentry server?
I just ran into this problem myself. I was using Supervisor to start my Sentry server, and for some reason it was not killing Sentry when I stopped Supervisor. To fix this, I ran sudo netstat -tulpn | grep 9000 to find the process that was still running; for me, it was gunicorn. Kill that process, then start the server again, and your new settings should take effect.
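A short sketch of that sequence, assuming Sentry's web process listens on port 9000 as in the answer above (the PID is a placeholder taken from the netstat output):

sudo netstat -tulpn | grep 9000   # find the process still bound to Sentry's port
sudo kill <PID>                   # stop the leftover gunicorn master
sentry start                      # start Sentry again so the new sentry.conf.py is loaded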
I'm using systemctl to manage Sentry.
First, create an executable file, run_worker:
#!/bin/bash
source ~/.sentry/bin/activate
SENTRY_CONF=~/sentry sentry run worker > /var/log/sentry_worker.log 2>&1
Then, create service files like:
[Service]
ExecStart={YourPath}/sentry/run_worker
Restart=always
StartLimitInterval=0
[Install]
WantedBy=default.target
Create sentry_web.service and sentry_cron.service likewise, and use
systemctl --user restart sentry_*
to restart.
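After adding or editing these unit files, something like the following applies them; the unit name sentry_worker.service is an assumed example matching the run_worker script above:

systemctl --user daemon-reload           # pick up new or changed unit files
systemctl --user enable sentry_worker    # start the worker automatically at login (example unit name)
systemctl --user restart sentry_worker   # restart it now so the new config is used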
If you are running your workers using supervisor, just run the commands to restart all the workers:
supervisorctl
restart all
Or, if you want to restart a single worker, enter:
supervisorctl
status
to get the list of the workers and use:
restart worker_name
This will restart the Sentry process and apply your new configuration.
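The same thing can be done non-interactively by passing the commands straight to supervisorctl, for example:

supervisorctl status               # list the managed programs
supervisorctl restart all          # restart everything supervisord manages
supervisorctl restart worker_name  # or restart just one program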