How do I load and cache data into a bottle.py app that's accessible in multiple functions without using global variables?

I have a bottle.py app that should load some data, parts of which get served depending on specific routes. (This is similar to memcached in principle, except the data isn't that big and I don't want the extra complexity.) I can load the data into global variables which are accessible from each function I write, but this seems less clean. Is there any way to load some data into a Bottle() instance during initialization?

You can do it by using bottle.default_app().
Here's a simple example.
main.py (uses sample code from http://bottlepy.org/docs/dev/)
import bottle
from bottle import route, run, template

app = bottle.default_app()
app.myvar = "Hello there!"  # add new variable to the app

@app.route('/hello/<name>')
def index(name='World'):
    return template('<b>Hello {{name}}</b>!', name=name)

run(app, host='localhost', port=8080)
some_handler.py
import bottle

def show_var_from_app():
    var_from_app = bottle.default_app().myvar
    return var_from_app
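If you prefer an explicit Bottle() application object instead of the default app, the same idea works with the app's built-in config dictionary. A minimal sketch (the 'myvar' key and the greeting template are just illustrative):

import bottle
from bottle import request, run, template

app = bottle.Bottle()
app.config['myvar'] = "Hello there!"  # store the preloaded data on the app itself

@app.route('/hello/<name>')
def index(name='World'):
    # request.app is the Bottle instance serving this request, so handlers
    # defined in other modules can reach the cached data without globals
    greeting = request.app.config['myvar']
    return template('<b>{{greeting}}, {{name}}</b>!', greeting=greeting, name=name)

run(app, host='localhost', port=8080)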

Related

How to initialize PYFMI models in parallel?

I am using pyfmi to do simulations with EnergyPlus. I noticed that initializing the individual EnergyPlus models takes quite some time. Therefore, I hope to find a way to initialize the models in parallel. I tried the Python library multiprocessing with no success. If it matters, I am on Ubuntu 16.10 and use Python 3.6.
Here is what I want to get done in serial:
fmus = {}
for id in id_list:
    chdir(fmu_path + str(id))
    fmus[id] = load_fmu('f_' + str(id) + '.fmu', fmu_path + str(id))
    fmus[id].initialize(start_time, final_time)
The result is a dictionary with the ids as keys and the models as values: {id1: FMUModelCS1, id2: FMUModelCS1}
The purpose is to call later the models by their key and do simulations.
Here is my attempt with multiprocessing:
from multiprocessing import Pool
from os import chdir
from pyfmi import load_fmu

def ep_intialization(id, start_time, final_time):
    chdir(fmu_path + str(id))
    model = load_fmu('f_' + str(id) + '.fmu', fmu_path + str(id))
    model.initialize(start_time, final_time)
    return {id: model}

data = ((id, start_time, final_time) for id in id_list)

if __name__ == '__main__':
    pool = Pool(processes=cpus)
    pool.starmap(ep_intialization, data)
    pool.close()
    pool.join()
I can see the processes of the models in my system monitor, but then the script raises an error because the models are not picklable:
MaybeEncodingError: Error sending result: '[{id2: <pyfmi.fmi.FMUModelCS1 object at 0x561eaf851188>}]'. Reason: 'TypeError('self._fmu,self.callBackFunctions,self.callbacks,self.context,self.variable_list cannot be converted to a Python object for pickling',)'
But I cannot imagine that there is no way to initialize the models in parallel. Other frameworks/libraries than threading/multiprocessing are also welcome.
I saw this answer but it seems that it focuses on the simulations after initialization.
The answer below the one you refer to seems to explain what the problem with multiprocessing and FMU instantiation is.
I tried with pathos as suggested in this answer, but ran into the same problem:
from pyfmi import load_fmu
from os import chdir
from pathos.multiprocessing import Pool

def ep_intialization(id):
    chdir('folder' + str(id))
    model = load_fmu('BouncingBall.fmu')
    model.initialize(0, 10)
    return {id: model}

id_list = [1, 2]
cpus = 2
data = ((id) for id in id_list)
pool = Pool(cpus)
out = pool.map(ep_intialization, data)
This gives:
MaybeEncodingError: Error sending result: '[{1: <pyfmi.fmi.FMUModelME2 object at 0x564e0c529290>}]'. Reason: 'TypeError('self._context,self._fmu,self.callBackFunctions,self.callbacks cannot be converted to a Python object for pickling',)'
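Since initialized FMU objects cannot be sent back to the parent process, the usual workaround is to keep each model entirely inside its worker and return only plain, picklable results. A rough sketch along those lines (not from the thread; it assumes fmu_path, id_list, start_time, final_time and cpus are defined as in the question, and that the whole simulation, not just the initialization, happens in the worker):

from multiprocessing import Pool
from os import chdir
from pyfmi import load_fmu

def init_and_simulate(id, start_time, final_time):
    chdir(fmu_path + str(id))
    model = load_fmu('f_' + str(id) + '.fmu', fmu_path + str(id))
    # simulate() performs its own initialization; the FMU object never
    # leaves this process, only plain numbers/arrays are returned
    res = model.simulate(start_time=start_time, final_time=final_time)
    return id, res['time'][-1]  # return whatever picklable summary you need

if __name__ == '__main__':
    data = [(id, start_time, final_time) for id in id_list]
    with Pool(processes=cpus) as pool:
        out = dict(pool.starmap(init_and_simulate, data))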
Here is another idea:
I suppose the instantiation is slow because EnergyPlus links plenty of libraries into the FMU. If the components you are modelling all have the same interface (input, output, parameters), you can probably use a single FMU with an additional parameter that switches between the models.
This would be much more efficient: You would only have to instantiate a single FMU and could call it in parallel with different parameters and inputs.
Example:
I have never worked with EnergyPlus, but maybe the following example will illustrate the approach:
You have three variants of a building, and you are merely interested in the total heat flux over the entire surface area of the buildings as a function of "weather" (whatever that means; maybe a lot of variables).
Put all three buildings into a single EnergyPlus model and build an if or case clause around them (pseudo code):
if (id_building == 1) {
    [model of building one]
} elseif (id_building == 2) {
    [model of building two]
}
[...]
Define the "weather" or whatever you need as an input variable for the FMU and define id_building also as a parameter. Define the overall heat flux as output variable.
This would allow you to choose the building before starting the simulation.
The two requirements are:
EnergyPlus syntax allows if or case structures.
All your models work with the same interface (in our example, weather as the input and a heat flux as the output variable).
There is a dirty workaround for the second requirement: Just define all the variables all your models need and only use what you need in the respective if block.
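If such a combined FMU existed, the pyfmi side could look roughly like this (a sketch only; the file name all_buildings.fmu and the parameter id_building are assumptions, not from the thread):

from pyfmi import load_fmu

# load the single combined FMU once, then reuse it for every variant
model = load_fmu('all_buildings.fmu')

results = {}
for building_id in (1, 2, 3):
    model.reset()                          # restore the instance for a fresh run
    model.set('id_building', building_id)  # assumed switching parameter
    results[building_id] = model.simulate(start_time=0, final_time=10)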

Using SimpleTemplate in Bottle

I'm new to frameworks like Bottle and am working through the documentation/tutorial.
Now I have a problem with the template engine:
I have a file called index.tpl in my views folder (it's plain HTML).
When I use the following code, my HTML is displayed as expected:
from bottle import Bottle, SimpleTemplate, run, template

app = Bottle()

@app.get('/')
def index():
    return template('index')

run(app, debug=True)
Now I want to use this engine in my project without calling template() directly.
I want to use it as it stands in the documentation, like:
tpl = SimpleTemplate('index')

@app.get('/')
def index():
    return tpl.render()
But if I do so, the browser just shows a white page with the word
index
written on it, instead of rendering the template.
The documentation has no further information on how to use this OO approach.
I just couldn't figure out why this happens and how to do it right...
Here's a nice, simple solution in the spirit of your original question:
tpl = SimpleTemplate(name='views/index.tpl')  # note the change here

@app.get('/')
def index():
    return tpl.render()
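If you would rather keep the short template name, SimpleTemplate can also be given a lookup path and should then find index.tpl by searching it. A small variant of the same fix:

from bottle import Bottle, SimpleTemplate, run

app = Bottle()
# let SimpleTemplate find 'index.tpl' by searching the views/ directory
tpl = SimpleTemplate(name='index', lookup=['views/'])

@app.get('/')
def index():
    return tpl.render()

run(app, debug=True)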

rails persist objects over requests in development mode

I am trying to interact with Matlab.Application.Single win32ole objects in my Rails application. The problem I am running into is that while I am developing my application, each separate request reloads my win32ole objects, so I lose the connection to my original Matlab instances and new instances are made. Is there a way to persist live objects between requests in Rails, or is there a way to reconnect to my Matlab.Application.Single instances?
In production mode I use module variables to store my connection between requests, but in development mode module variables are reloaded on every request.
Here is a snippet of my code:
require 'win32ole'

module Calculator
  @engine2 = nil
  @engine3 = nil

  def self.engine2
    if @engine2.nil?
      @engine2 = WIN32OLE.new("Matlab.Application.Single")
      @engine2.execute("run('setup_path.m')")
    end
    @engine2
  end

  def self.engine3
    if @engine3.nil?
      @engine3 = WIN32OLE.new("Matlab.Application.Single")
      @engine3.execute("run('setup_path.m')")
    end
    @engine3
  end

  def self.load_CT_image(file)
    Calculator.engine2.execute("spm_image('Init','#{file}')")
  end

  def self.load_MR_image(file)
    Calculator.engine3.execute("spm_image('Init','#{file}')")
  end
end
I am then able to use my code in my controllers like this:
Calculator.load_CT_image('Post_Incident_CT.hdr')
Calculator.load_MR_image('Post_Incident_MRI.hdr')
You can keep an app-wide object in a constant that won't be reset for every request. Add this to a new file in config/initializers/:
ENGINE_2 = WIN32OLE.new("Matlab.Application.Single")
You might also need to include the .execute("run('setup_path.m')") line here as well (I'm not familiar with WIN32OLE). You can then assign that object to your instance variables in your Calculator module (just replace the WIN32OLE.new("Matlab.Application.Single") call with ENGINE_2), or simply refer to it directly.
I know this is beyond the scope of your question, but you have a lot of duplicated code here, and you might want to think about creating a class or module to manage your Matlab instances -- spinning up new ones as needed, and shutting down old ones that are no longer in use.

Avoid repeated calls to an API in Jekyll Ruby plugin

I have written a Jekyll plugin to display the number of pageviews on a page by calling the Google Analytics API using the garb gem. The only trouble with my approach is that it makes a call to the API for each page, slowing down build time and also potentially hitting the user call limits on the API.
It would be possible to return all the data in a single call and store it locally, and then look up the pageview count from each page, but my Jekyll/Ruby-fu isn't up to scratch. I do not know how to write the plugin to run once to get all the data and store it locally where my current function could then access it, rather than calling the API page by page.
Basically my code is written as a liquid block that can be put into my page layout:
class GoogleAnalytics < Liquid::Block
  def initialize(tag_name, markup, tokens)
    super # options that appear in block (between tag and endtag)
    @options = markup # optional options passed in by the opening tag
  end

  def render(context)
    path = super

    # Read in credentials and authenticate
    cred = YAML.load_file("/home/cboettig/.garb_auth.yaml")
    Garb::Session.api_key = cred[:api_key]
    token = Garb::Session.login(cred[:username], cred[:password])
    profile = Garb::Management::Profile.all.detect {|p| p.web_property_id == cred[:ua]}

    # place query, customize to modify results
    data = Exits.results(profile,
                         :filters => {:page_path.eql => path},
                         :start_date => Chronic.parse("2011-01-01"))
    data.first.pageviews
  end
end
Full version of my plugin is here
How can I move all the calls to the API into some other function, make sure Jekyll runs it once at the start, and then adjust the tag above to read that local data?
EDIT: Looks like this can be done with a Generator and writing the data to a file. See the example on this branch. Now I just need to figure out how to subset the results: https://github.com/Sija/garb/issues/22
To store the data, I had to:
1. Write a Generator class (see Jekyll wiki plugins) to call the API.
2. Convert the data to a hash (for easy lookup by path, see step 5):
   result = Hash[data.collect{|row| [row.page_path, [row.exits, row.pageviews]]}]
3. Write the data hash to a JSON file.
4. Read the data back in from the file in my existing Liquid block class.
   Note that the block tag works from the _includes dir, while the generator works from the root directory.
5. Match the page path, which is easy once the data is converted to a hash:
   result[path][1]
Code for the full plugin, showing how to create the generator and write files, etc, here
And thanks to Sija on GitHub for help on this.

why decorate Jinja2 instances with @webapp2.cached_property

The webapp2 site (http://webapp-improved.appspot.com/api/webapp2_extras/jinja2.html) has a tutorial on how to use webapp2_extras.jinja2, and the code is below.
My question is: why cache the webapp2_extras.jinja2.Jinja2 instance returned by jinja2.get_jinja2(app=self.app)? I checked the code of @webapp2.cached_property and found that it caches the Jinja2 instance on an instance of BaseHandler, which will be destroyed after the request, so why bother to cache it? Did I miss something here?
import webapp2
from webapp2_extras import jinja2

class BaseHandler(webapp2.RequestHandler):

    @webapp2.cached_property
    def jinja2(self):
        # Returns a Jinja2 renderer cached in the app registry.
        return jinja2.get_jinja2(app=self.app)

    def render_response(self, _template, **context):
        # Renders a template and writes the result to the response.
        rv = self.jinja2.render_template(_template, **context)
        self.response.write(rv)
Here you can find the documentation about cached_property.
The BaseHandler class will be called often later on. My understanding is that, to avoid the overhead of calling jinja2.get_jinja2(app=self.app) each time, the reference is evaluated on first access only and then returned from the cache afterwards, i.e. every time a view is called.
To see this happen in code, see this example, where each view is derived from the same BaseHandler class.
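For intuition, here is a minimal sketch of how such a cached property works in plain Python (not webapp2's actual implementation): the decorated method runs once per handler instance, and its result is stored on the instance, so every later access within the same request skips the call.

class cached_property:
    """Non-data descriptor: compute once per instance, then cache."""

    def __init__(self, func):
        self.func = func

    def __get__(self, obj, objtype=None):
        if obj is None:
            return self
        value = self.func(obj)                     # evaluated on first access only
        obj.__dict__[self.func.__name__] = value   # later lookups hit the instance dict
        return value

class Handler:
    @cached_property
    def renderer(self):
        print("building renderer")                 # printed once per Handler instance
        return object()

h = Handler()
h.renderer  # prints "building renderer"
h.renderer  # cached: no call, no print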
