gevent-socketio + Flask + Gunicorn - socket.io

Can I use gevent-socketio with Flask, running under Gunicorn, and still enjoy the nice exception printing, debugger, and reload capability that Flask offers? What would my Gunicorn worker and WSGI app class look like?

I had the exact same problem, so I solved it by using watchdog.
pip install watchdog
together with this command:
watchmedo shell-command --patterns="*.py;*.less;*.css;*.js;*.txt;*.html" --recursive --command='kill -HUP `cat /tmp/gunicorn.pid` && echo "Reloading code" >> /tmp/gunicorn.log' ~/projectfolder
It requires that you daemonize the gunicorn process (well, not really, but I point "Reloading code" into the same logfile, so it's a nice thing to have), which I do like this:
gunicorn_config.py
workers = 2
worker_class = 'socketio.sgunicorn.GeventSocketIOWorker'
bind = '0.0.0.0:5000'
pidfile = '/tmp/gunicorn.pid'
debug = True
loglevel = 'debug'
errorlog = '/tmp/gunicorn.log'
daemon = True
Start the application:
gunicorn run:app -c gunicorn_config.py
View the log:
tail -f /tmp/gunicorn.log
From this point everything should be reloaded with each change in your project.
It's a bit complicated, but since gunicorn with this worker (or the built-in socketio server) doesn't have any reloading capability, I had to do it like this.
It's a different approach compared to the decorator solution in the other answer, but I like to keep the actual code clean of development-specific solutions. Both accomplish the same thing, so I guess you'll just have to pick the solution you like. :)
Oh, and as an added bonus you get to use the production server in development, which means both environments match each other.

I've been looking into this subject lately. I don't think you can easily use the autoreload feature with Flask + gevent-socketio + Gunicorn; Gunicorn is a production server that does not natively support such features.
However, I found a nice solution for my development server: use the SocketIOServer provided with the library, plus a Flask snippet for autoreload. Here is the startup script (runserver.py):
from myapp import app
from gevent import monkey
from socketio.server import SocketIOServer
import werkzeug.serving

# necessary for autoreload (at least)
monkey.patch_all()

PORT = 5000

# run_with_reloader restarts the process whenever the code changes;
# used as a decorator it also invokes the function, starting the server.
@werkzeug.serving.run_with_reloader
def runServer():
    print 'Listening on %s...' % PORT
    ws = SocketIOServer(('0.0.0.0', PORT), app, resource="socket.io", policy_server=False)
    ws.serve_forever()
This solution is inspired by: http://flask.pocoo.org/snippets/34/
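To try it, save the script as runserver.py and start it directly; the reloader then restarts it whenever a file changes:
python runserver.py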

I've made some tweaks to the Werkzeug debugger so it now works with socket.io namespaces; see below and enjoy :)
https://github.com/aldanor/SocketIO-Flask-Debug


D3, loading a csv file, filepath issue? [duplicate]

I'm just learning d3, and I'm attempting to import data from a CSV file, but I keep getting the error "XMLHttpRequest cannot load file:///Users/Laura/Desktop/SampleECG.csv. Cross origin requests are only supported for HTTP.". I've searched for how to fix this error and have run it on a local web server, but I haven't found a solution that works for d3.v2.js. Here's a sample of the code:
var Time = [];
var ECG1 = [];
d3.csv("/Desktop/d3Project/Sample.csv", function(data)
{
    Time = data.map(function(d) {return [+d["Time"]];});
    ECG1 = data.map(function(d) {return [+d["ECG1"]];});
    console.log(Time);
    console.log(ECG1);
});
Any help will be much appreciated.
This confused me too (I am also a d3 beginner).
So, for some reason, web browsers are not happy about you loading local data, probably for security reasons. Anyway, to get around this, you have to run a local web server. This is easy.
In your terminal, after cd-ing to your website's document root (thanks @daixtr), type:
python -m SimpleHTTPServer 8888 &
Okay, now as long as that terminal window is open and running, your local 8888 web server will be running.
So in my case, the web page I was working on was originally
file://localhost/Users/hills/Desktop/website/visualizing-us-bls-data-inflation-and-prices.html
when I opened it in Chrome. To open my page on the local web server instead, I just typed into the Chrome address bar:
http://localhost:8888/Desktop/website/visualizing-us-bls-data-inflation-and-prices.html
Now reading in CSVs should work. Weird, I know.
To those using the built-in Python web server and still experiencing issues: do remember to run the "python -m SimpleHTTPServer 8888" invocation from the directory you consider to be your document root. That is, you cannot just run 'python -m SimpleHTTPServer 8888' anywhere; you have to 'cd /to/correct/path/' containing your index.html or data.tsv and then, from there, run 'python -m SimpleHTTPServer 8888'.
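For example, assuming the project lives in a (hypothetical) ~/d3Project folder containing index.html and the CSV, that would be:
cd ~/d3Project
python -m SimpleHTTPServer 8888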
Also, just learning D3 for school work. I was trying to run this simple D3 example:
https://gist.github.com/d3noob/b3ff6ae1c120eea654b5
I had the same problem as the OP re: loading data using the Chrome browser. I bet the great solution Hillary Sanders posted above was re: Python 2.x.
My answer is re: Python 3.x [OS: Ubuntu 16.x]:
Open a terminal window within the root directory of your project, then run:
python3 -m http.server
It will serve HTTP on port 8000 by default, unless that port is already taken; in that case, to use another port, e.g. 7800, run:
python3 -m http.server 7800
Then, on your Chrome browser address bar type:
localhost:8000
The above worked for me because I only had an index.html page in my root folder. In case you have an HTML page with a different name, type the whole path to that local HTML page and it should work as well. You should then be able to see the graph created from the data set in my link (which must be in a folder like data/data.csv). I hope this helps. :-)
Use Firefox, idk what Chrome tries to accomplish

Flask with MongoDB using MongoKit to MongoLabs

I am a beginner and I have a simple application I have developed locally, which uses MongoDB with MongoKit as follows:
app = Flask(__name__)
app.config.from_object(__name__)
customerDB = MongoKit(app)
customerDB.register([CustomerModel])
Then in my views I just use customerDB.
I have put everything on Heroku, but my database connection doesn't work.
I got the connection URI I need by running:
heroku config | grep MONGOLAB_URI
but I am not sure how to use this value. I looked at the following post, but I am more confused:
How can I use the mongolab add-on to Heroku from python?
Any help would be appreciated.
Thanks!
According to the documentation, Flask-MongoKit supports a set of configuration settings.
MONGODB_DATABASE
MONGODB_HOST
MONGODB_PORT
MONGODB_USERNAME
MONGODB_PASSWORD
The MONGOLAB_URI environment setting needs to be parsed to get each of these. We can use this answer to the question you linked to as a starting point.
import os
from urlparse import urlsplit
from flask import Flask
from flask_mongokit import MongoKit

app = Flask(__name__)

# Get the URL from the Heroku setting.
url = os.environ.get('MONGOLAB_URI', 'mongodb://localhost:27017/some_db_name')

# Parse it.
parsed = urlsplit(url)

# The database name comes from the path, minus the leading /.
app.config['MONGODB_DATABASE'] = parsed.path[1:]

if '@' in parsed.netloc:
    # If there are authentication details, split them off the network locality.
    auth, server = parsed.netloc.split('@')
    # The username and password are in the first part, separated by a :.
    app.config['MONGODB_USERNAME'], app.config['MONGODB_PASSWORD'] = auth.split(':')
else:
    # Otherwise the whole thing is the host and port.
    server = parsed.netloc

# Split whatever version of netloc we have left to get the host and port.
host, port = server.split(':')
app.config['MONGODB_HOST'] = host
app.config['MONGODB_PORT'] = int(port)  # pymongo expects an integer port
customerDB = MongoKit(app)
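With the connection configured this way, the views can keep using customerDB exactly as before. A minimal sketch, assuming the CustomerModel registered above has a (hypothetical) name field:
@app.route('/customers')
def list_customers():
    # Query through the document class registered with customerDB.register()
    customers = customerDB.CustomerModel.find()
    return ', '.join(c['name'] for c in customers)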

Disable Rack::CommonLogger without monkey patching

So, I want to have completely custom logging for my Sinatra application, but I can't seem to disable the Rack::CommonLogger.
As per the Sinatra docs, all I should need to do is add the following line (I tried setting it to false as well):
set :logging, nil
to my configuration. However, this does not work, and I still receive the Apache-like log messages in my terminal. So the only solution I've found so far is to monkey patch the damn thing.
module Rack
class CommonLogger
def call(env)
# do nothing
#app.call(env)
end
end
end
Anyone got any ideas if it's possible to disable this without resorting to such measures?
I monkey patched the log(env, status, header, began_at) function, which is what gets called by Rack to produce the Apache-style logs. We use JRuby with Logback, so we have no use for all the custom logging that seems to pervade the Ruby ecosystem. I suspect Fishwife is initializing the CommonLogger, which might explain why all my attempts to disable it or to configure it with a custom logger fail. Probably Puma does something similar. I actually had two instances at one point: one logging with my custom logger (yay) and another one still doing its silly puts statements on stderr. I must say, I'm pretty appalled by the logging hacks in the Rack ecosystem. Seems somebody needs a big cluebat to their heads.
Anyway, putting this in our config.ru works for us:
module Rack
class CommonLogger
def log(env, status, header, began_at)
# make rack STFU; our logging middleware takes care of this
end
end
end
In addition to that, I wrote my own logging middleware that uses slf4j with a proper MDC so we get more meaningful request logging.
Puma adds logging middleware to your app if you are in development mode and haven’t set the --quiet option.
To stop Puma logging in development, pass the -q or --quiet option on the command line:
puma -p 3001 -q
or if you are using a Puma config file, add quiet to it.
Rack includes a few middlewares by default when you rackup your application. It is defined in this file.
By default, as you mention, the logging middleware is enabled.
To disable it, just pass the option --quiet to rackup:
rackup --quiet
And the default logging middleware will not be enabled.
Then, you can enable your own logging middleware without pollution by the default logger (named CommonLogger).
You can also add #\ --quiet to the top of your config.ru file to avoid always typing --quiet, but this behaviour is now deprecated.
It's probably not Sinatra that is writing to STDOUT or STDERR, but your web server. Puma can be started with the -q (quiet) option, as noted by @matt. When using WEBrick, make sure the AccessLog configuration variable is an empty array; otherwise requests will also be logged to your standard output.
Disabling echo from webrick
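For Sinatra specifically, a minimal sketch of silencing WEBrick that way (assuming your server settings are passed through to WEBrick, as with the Puma example in the next answer):
configure do
  set :server, :webrick
  # an empty AccessLog disables WEBrick's Apache-style request lines
  set :server_settings, AccessLog: []
end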
This is one of the top results, so this is probably more of a message to my future self the next time I'm annoyed to death about Sinatra/Puma not shutting up. But to actually get a silent startup:
class MyApp < Sinatra::Base
configure do
set :server, :puma
set :quiet, true
set :server_settings, Silent: true
end
end

Procfile gunicorn custom module name

Context:
I am writing a medium-sized Flask application (10-15 views), and in the process I am hoping to organize the code in a manner that will make it easily maintainable and extensible (not a monolithic file, as most Flask applications are).
The structure of the application mimics the documentation as follows:
/AwesomeHackings
    /ENV
    /AwesomeHackings
        /models
        /static
        /templates
        /__init__.py
        /awesome.py
    /awesome.cfg
    /Procfile
    /README.MD
    /requirements.txt
    /run.py
Problem:
I am unable to get foreman to work with a Flask application that is not named 'app'. I would love to have run.py be the entry point to my application.
I am using gunicorn + gevent, and my current Procfile contains:
web: gunicorn -w 2 -b 0.0.0.0:$PORT -k gevent app:run
I have been using run.py to test the application:
from AwesomeHackings import awesome
awesome.app.run(debug=True)
Thus I assumed I could simply substitute run for app in the Procfile, but when executing foreman start, gunicorn fails with meaningless verbiage about modules.
I found the solution in Django's documentation. The main argument to gunicorn is the app module:
gunicorn [OPTIONS] APP_MODULE
where APP_MODULE has the pattern MODULE_NAME:VARIABLE_NAME.
While it seemed logical for the syntax to be a keyword argument like app:someIdentifier, since all of the tutorials use a module named app, that is in fact not the case. The correct argument for my situation was run:app.
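So, keeping everything else from the question's Procfile and only swapping the APP_MODULE, the working line would be:
web: gunicorn -w 2 -b 0.0.0.0:$PORT -k gevent run:app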

Job handler serialization incorrect when running delayed_job in production with Thin or Unicorn

I recently brought delayed_job into my Rails 3.1.3 app. In development everything is fine. I even staged my DJ release on the same VPS as my production app using the same production application server (Thin), and everything was fine. Once I released to production, however, all hell broke loose: none of the jobs were entered into the jobs table correctly, and I started seeing the following in the logs for all processed jobs:
2012-02-18T14:41:51-0600: [Worker(delayed_job host:hope pid:12965)] NilClass# completed after 0.0151
2012-02-18T14:41:51-0600: [Worker(delayed_job host:hope pid:12965)] 1 jobs processed at 15.9666 j/s, 0 failed ...
NilClass and no method name? Certainly not correct. So I looked at the serialized handler on the job in the DB and saw:
"--- !ruby/object:Delayed::PerformableMethod\nattributes:\n id: 13\n event_id: 26\n name: memememe\n api_key: !!null \n"
No indication of a class or method name. And when I load the YAML into an object and call #object on the resulting PerformableMethod, I get nil. For kicks I then fired up the console on the broken production app and delayed the same job. This time the handler looked like:
"--- !ruby/object:Delayed::PerformableMethod\nobject: !ruby/ActiveRecord:Domain\n attributes:\n id: 13\n event_id: 26\n name: memememe\n api_key: !!null \nmethod_name: :create_a\nargs: []\n"
And sure enough, that job runs fine. Puzzled, I then recalled reading something about DJ not playing nice with Thin. So I tried Unicorn and was sad to see the same result. Hours of research later, I think this has something to do with how the app server is loading the YAML libraries Psych and Syck and DJ's interaction with them. I cannot, however, pin down exactly what is wrong.
Note that I'm running delayed_job 3.0.1 official, but have tried upgrading to the master branch and have even tried downgrading to 2.1.4.
Here are some notable differences between my stage and production setups:
In stage I run 1 Thin server on a TCP port -- no web proxy in front
In production I run 2+ Thin servers and proxy to them with Nginx. They talk over a UNIX socket
When I tried Unicorn it was 1 app server proxied to by Nginx over a UNIX socket
Could the web proxying/Nginx have something to do with it? Please, any insight is greatly appreciated. I've spent a lot of time integrating delayed_job and would hate to have to shelve the work or, worse, toss it. Thanks for reading.
I fixed this by not using #delay. Instead I replaced all of my "model.delay.method" code with custom jobs. Doing so works like a charm, and is ultimately more flexible. This fix works fine with Thin. I haven't tested with Unicorn.
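For anyone curious, a minimal sketch of such a custom job (the Event model and create_a method are taken from the question's YAML and otherwise hypothetical):
# delayed_job serializes the struct members and calls #perform
# when a worker picks the job up.
class CreateAJob < Struct.new(:event_id)
  def perform
    Event.find(event_id).create_a
  end
end

# enqueue this instead of calling event.delay.create_a
Delayed::Job.enqueue CreateAJob.new(26)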
I'm running into a similar problem with Rails 3.0.10 and DJ 2.1.4; it's most certainly a different YAML library being loaded when running from the console vs. from the app server (Thin, Unicorn, Nginx). I'll share any solution I come up with.
Ok so removing these lines from config/boot.rb fixed this issue for me.
require 'yaml'
YAML::ENGINE.yamler = 'syck'
This had been placed there to fix a YAML parsing error, forcing YAML to use 'syck'. Removing it required me to fix the underlying issues with the .yml files. More on this here
Now my delayed job record handlers match between those created via the server (Unicorn in my case) and the console. Both my server and delayed job workers are kicked off within bundler:
Unicorn
cd #{rails_root} && bundle exec unicorn_rails -c #{rails_root}/config/unicorn.rb -E #{rails_env} -D
DJ
export LANG=en_US.utf8; export GEM_HOME=/data/reception/current/vendor/bundle/ruby/1.9.1; cd #{rails_root}; /usr/bin/ruby1.9.1 /data/reception/current/script/delayed_job start staging
