My application is reloading due to Werkzeug's reload-on-code-change feature. I would like to disable this; in production I am running Gunicorn.
gunicorn -b 0.0.0.0:5000 \
--workers 12 \
--log-level "${LOGGING_LEVEL}" \
--timeout 240 \
--preload "wsgi:create_app('${FLASK_ENV:-development}')"
I know running "flask run --no-reload" will disable the reload, but how would this be done with Gunicorn?
I think Flask activates the auto-reloader based on its environment: FLASK_ENV=development turns on debug mode and, with it, the reloader.
Maybe change FLASK_ENV:-development to FLASK_ENV:-production, or just remove it from your command.
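The answer above suggests gating on the environment name passed to the factory; a minimal sketch of that decision (the helper name is hypothetical, for illustration only):

```python
# Sketch: decide whether debug/auto-reload should be on, based on the
# environment name handed to the app factory. Mirrors Flask's behavior
# of tying the reloader to the "development" environment.
def use_reloader(env_name: str) -> bool:
    return env_name == "development"

print(use_reloader("production"))   # False
print(use_reloader("development"))  # True
```

With the command's default changed to ${FLASK_ENV:-production}, the factory receives "production" and the reloader stays off.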
docker run -it --name mc3 dockerhub:5000/bitnami/minio-client
08:05:31.13
08:05:31.14 Welcome to the Bitnami minio-client container
08:05:31.14 Subscribe to project updates by watching https://github.com/bitnami/containers
08:05:31.14 Submit issues and feature requests at https://github.com/bitnami/containers/issues
08:05:31.15
08:05:31.15 INFO ==> ** Starting MinIO Client setup **
08:05:31.16 INFO ==> ** MinIO Client setup finished! ** mc: Configuration written to /.mc/config.json. Please update your access credentials.
mc: Successfully created /.mc/share.
mc: Initialized share uploads /.mc/share/uploads.json file.
mc: Initialized share downloads /.mc/share/downloads.json file.
mc: /opt/bitnami/scripts/minio-client/run.sh is not a recognized command. Get help using --help flag.
dockerhub:5000/bitnami/minio-client - name of the image
It would be great if someone could help me figure out how to solve this issue, as I've been stuck here for more than two days.
MinIO has two components:
Server
Client
The Server runs continuously, as it should, so it can serve the data.
The client, on the other hand, which you are trying to run, is used to perform operations on a running server. So it's expected for it to run and then immediately exit: it's not a daemon and it's not meant to run forever.
What you want to do is first launch the server container in the background (using the -d flag). Put it on a user-defined network so the client container can later resolve it by name:
$ docker network create minio-net
$ docker run -d --name minio-server \
  --network minio-net \
  --env MINIO_ROOT_USER="minio-root-user" \
  --env MINIO_ROOT_PASSWORD="minio-root-password" \
  minio/minio:latest server /data
Then launch the client container on the same network to perform some operation, for example making/creating a bucket. It will run the command against the server and exit immediately, after which the container is cleaned up (thanks to the --rm flag). Note that the MINIO_SERVER_* variables are understood by the Bitnami client image (the one from your question), so use that image here:
$ docker run --rm --name minio-client \
  --network minio-net \
  --env MINIO_SERVER_HOST="minio-server" \
  --env MINIO_SERVER_ACCESS_KEY="minio-root-user" \
  --env MINIO_SERVER_SECRET_KEY="minio-root-password" \
  bitnami/minio-client \
  mb minio/my-bucket
For more information, please check out the docs:
Server: https://min.io/docs/minio/container/operations/installation.html
Client: https://min.io/docs/minio/linux/reference/minio-mc.html
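Since the client is a one-shot container, it can help to confirm the server is actually up before running it. MinIO exposes a liveness endpoint (/minio/health/live, on port 9000 by default); here is a small stdlib-only sketch, with host and port as assumptions to adjust to your setup:

```python
import urllib.request

def minio_is_live(host: str = "localhost", port: int = 9000, timeout: float = 2.0) -> bool:
    """Return True if the MinIO server answers its liveness probe."""
    url = f"http://{host}:{port}/minio/health/live"
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except OSError:
        # Connection refused, DNS failure, timeout, etc.
        return False
```

If this returns False, check the server container's logs before debugging the client.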
We have a problem setting up aws-sigv4 and connecting to an AWS AMP workspace via Docker images.
TAG: grafana/grafana:7.4.5
The main problem is that the sigv4 configuration screen does not appear in the UI.
Installing grafana:7.4.5 locally via the standalone Linux binaries works.
Just setting the environment variables,
export AWS_SDK_LOAD_CONFIG=true
export GF_AUTH_SIGV4_AUTH_ENABLED=true
the configuration screen appears.
Connecting to and querying data from AMP via the corresponding IAM instance role works flawlessly.
Doing the same in the Docker image via ENV variables does NOT work.
When using grafana/grafana:sigv4-web-identity it works, but it seems to me that this is just a "test image".
How do I configure the default Grafana image in order to enable sigv4 authentication?
It works for me:
$ docker run -d \
-p 3000:3000 \
--name=grafana \
-e "GF_AUTH_SIGV4_AUTH_ENABLED=true" \
-e "AWS_SDK_LOAD_CONFIG=true" \
grafana/grafana:7.4.5
You didn't provide a minimal reproducible example, so it's hard to say what the problem is in your case.
Use the variable GF_AWS_SDK_LOAD_CONFIG instead of AWS_SDK_LOAD_CONFIG.
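For background on why the GF_-prefixed spelling matters: Grafana lets you override any grafana.ini option through environment variables named GF_&lt;SectionName&gt;_&lt;KeyName&gt;. A small illustrative helper (not a Grafana API) showing how the variable in question is derived:

```python
def grafana_env_var(section: str, key: str) -> str:
    # Grafana maps config options to env vars as GF_<SECTION>_<KEY>,
    # upper-cased, with dots in section names becoming underscores.
    return f"GF_{section}_{key}".upper().replace(".", "_")

print(grafana_env_var("auth", "sigv4_auth_enabled"))  # GF_AUTH_SIGV4_AUTH_ENABLED
```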
This is driving me crazy:
I created a new .NET Core web app from VS2019, adding support for Docker (Linux containers).
Everything works fine: if I start the debugger from VS, the image is built, the container is started, and the web app is available at http://localhost:32772/weatherforecast.
Then I clean it all up, and try to build and run manually:
docker build -t webapp2 --file webapplication2/Dockerfile .
docker run --name webapp2 -p 5000:5000 -t webapp2
(or even docker run --name webapp2 -p 5000:5000 -e "ASPNETCORE_ENVIRONMENT=Development" -t webapp2)
The build runs successfully, and (apparently) the run command works fine too:
But... surprise... this way I cannot reach the app anymore (at http://localhost:5000/weatherforecast)!
I've tried almost everything: using the internal IP address from inspect, changing ports and run commands, adding -e "ASPNETCORE_URLS=https://+:443;http://+:80"; nothing seems to work.
So the question is: what kind of magic is behind the VS debug command?
I tried to see what's there but I don't see anything useful:
docker run -dt -v "C:\Users\carlo\vsdbg\vs2017u5:/remote_debugger:rw" -v "C:\Progetti\prove\docker\API\WebApplication2:/app" -v "C:\Progetti\prove\docker\API:/src/" -v "C:\Users\carlo\.nuget\packages\:/root/.nuget/fallbackpackages2" -v "C:\Program Files\dotnet\sdk\NuGetFallbackFolder:/root/.nuget/fallbackpackages" -e "DOTNET_USE_POLLING_FILE_WATCHER=1" -e "ASPNETCORE_LOGGING__CONSOLE__DISABLECOLORS=true" -e "ASPNETCORE_ENVIRONMENT=Development" -e "NUGET_PACKAGES=/root/.nuget/fallbackpackages2" -e "NUGET_FALLBACK_PACKAGES=/root/.nuget/fallbackpackages;/root/.nuget/fallbackpackages2" -P --name WebApplication2 --entrypoint tail webapplication2:dev -f /dev/null
Thanks!
Passing the port to docker run doesn't somehow override the port the application is running on. All you're saying is that you want port 5000 in the container exposed as port 5000 on the host. However, your app is running on 80 inside the container, so that buys you nothing. You'd need -p 5000:80.
The ASPNETCORE_URLS environment variable is just a way to configure the URLs of your app, which in a container binds to https://+:443;http://+:80 by default. Setting the environment variable to the same thing again does nothing. You could set it to something like http://+:5000, which would change the internal port to 5000 instead of 80; then your original docker run command would have worked, because there would actually be something running on port 5000.
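The order of -p's arguments trips a lot of people up: docker publishes HOST_PORT:CONTAINER_PORT, in that order. A tiny illustrative parser (hypothetical helper, just to make the rule concrete):

```python
def parse_publish(spec: str) -> dict:
    """docker run -p takes HOST_PORT:CONTAINER_PORT, in that order."""
    host, container = spec.split(":")
    return {"host": int(host), "container": int(container)}

# To browse http://localhost:5000 while the app listens on 80 inside
# the container, publish 5000:80 (not the other way around):
print(parse_publish("5000:80"))  # {'host': 5000, 'container': 80}
```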
I deployed a Flask application to a VPS, and I'm using Gunicorn as the web server.
I'm running the Gunicorn server with this command:
gunicorn --bind=0.0.0.0 run:app --access-logfile '-'
With this command I can see the logs streaming. But after I close my terminal session, I'd like to be able to see the running logs again.
In Heroku I can use heroku logs -t to do that; is there a similar way to see them with Gunicorn?
You can set up Supervisor. Supervisor keeps your server running and saves its logs. Set up a Supervisor config file like the one below, and then you can see the logs:
[program:your_project_name]
command=/home/your_virtualenv/bin/gunicorn run_apiengine:main_app --log-level debug --bind 0.0.0.0:5007 --workers 2 --worker-class gevent
directory=your_project_directory
stdout_logfile=your_log_folder_path/supervisor_stdout.log
stderr_logfile=your_log_folder_path/supervisor_stderr.log
user=your_user
autostart=true
environment=PYTHONPATH="your_python_path",OAUTHLIB_INSECURE_TRANSPORT="1"
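To get a heroku logs -t feel once Supervisor is writing the files, you can run supervisorctl tail -f your_project_name, or follow the log file directly. A minimal stdlib sketch of following a file as it grows (the path is the stdout_logfile configured above; adjust it to yours):

```python
import time

def follow(path: str, poll: float = 0.5, from_start: bool = False):
    """Yield lines appended to `path`, like `tail -f`."""
    with open(path) as f:
        if not from_start:
            f.seek(0, 2)  # jump to the end; only report new lines
        while True:
            line = f.readline()
            if line:
                yield line
            else:
                time.sleep(poll)  # no new data yet; wait and retry

# Usage (runs until interrupted):
# for line in follow("your_log_folder_path/supervisor_stdout.log"):
#     print(line, end="")
```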
I start RabbitMQ on Docker with this command:
docker run -d --hostname my-rabbit --name rabbit-fox -p 5672:5672 -p 8090:15672 rabbitmq:3-management
It runs fine and I can log into the console, but later in the Chrome browser I get this:
and after that I cannot use the console in the browser.
Clearing the browser's cache and memory did the trick in my case.
After facing the same issue, these are the steps I took:
I tried to re-run the Docker container; I even got to the point of re-installing the RabbitMQ server image, without any result.
It was simply solved when I cleared the browser's cache and memory.