GEKKO - General - custom reusable flowsheet object - chemical process flowsheet modelling - gekko

No problems to speak of, and I am not currently a user. I am seeking advice on best implementation practice for flowsheet models. Is there a framework for creating custom flowsheet objects in GEKKO's Chemicals module? Is the flowsheet module a mature, first-class feature of GEKKO?
I am dealing with a number of applications that would benefit from the ability to inherit flowsheet objects from a yet-to-be-developed custom library, if possible. One such item could be a tubular reactor as described here, where it is solved in COMSOL (http://umich.edu/~elements/5e/web_mod/radialeffects/unsteady/index1.htm). Scenarios could involve several unit operations connected in series with recycle streams, such as mixer-settlers in solvent extraction, which also involves multiple liquid phases (organic and aqueous). It is worth noting that all of the models would be unsteady-state.
I appreciate the thoughts of the user group in this respect.

Gekko doesn't currently allow black-box models where the equations are not available for requesting information such as first and second derivatives in sparse form. For that reason, a model in COMSOL wouldn't be a good fit for Gekko. If you would like to try to model the same PDE in Gekko, that is a possibility. Here are some PDE applications that may help give you inspiration:
Solid Oxide Fuel Cell
Parabolic and Hyperbolic PDEs Solved with Gekko
The Chemicals library is somewhat limited, but it does have some thermodynamic data and basic reactor types. You could put many lumped-parameter reactors in series to emulate a plug flow reactor, but it may be better to write out the PDE equations yourself rather than relying on the Chemicals library.
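The CSTRs-in-series idea can be sketched without GEKKO at all. Here is a minimal illustration using scipy's `solve_ivp` instead (every number below - rate constant, residence time, number of tanks - is made up for the example): a cascade of N equal stirred tanks with a first-order reaction approaches the plug-flow outlet concentration as N grows.

```python
# Sketch: emulating a plug flow reactor with a cascade of N lumped CSTRs.
# Parameters are illustrative, not from any real process.
import numpy as np
from scipy.integrate import solve_ivp

k = 2.0      # first-order rate constant, 1/s (assumed)
tau = 1.0    # total residence time, s (assumed)
N = 50       # number of lumped reactors in series
c_in = 1.0   # feed concentration

def cascade(t, c):
    # each tank i: dc_i/dt = (c_{i-1} - c_i)/(tau/N) - k*c_i
    upstream = np.concatenate(([c_in], c[:-1]))
    return (upstream - c) / (tau / N) - k * c

sol = solve_ivp(cascade, (0, 20), np.zeros(N), rtol=1e-8)
c_out = sol.y[-1, -1]        # last tank, final time (near steady state)
print(c_out)                 # approaches the plug-flow value exp(-k*tau)
print(np.exp(-k * tau))
```

The same ODE system can be written in GEKKO with `m.Var` and `m.Equation` objects once you want optimization or estimation on top of the simulation.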

Related

How to validate my YOLO model trained on custom data set?

I am doing research on object detection using YOLO, although I am from the civil engineering field and not familiar with computer science. My advisor is asking me to validate my YOLO detection model trained on a custom dataset, but my problem is that I really don't know how to validate my model. So please kindly point out how to validate my model.
Thanks in advance.
I think first you need to make sure that all the cases you are interested in (location of objects, their size, general view of the scene, etc.) are represented in your custom dataset - in other words, that the collected data reflects your task. You can discuss it with your advisor. The main rule: label the data in the same manner, and with the same quality, as you want to see in the output. More information can be found here.
This is really important - garbage in, garbage out: the quality of your trained model's output is determined by the quality of the input (labelled data).
Once this is done, it is common practice to split your data into training and test sets. Only the train set is used during model training, and you can later validate quality (generalizing ability, robustness, etc.) on data the model has not seen - the test set. It is also important that these two subsets don't overlap - otherwise your model will be overfitted and its measured performance will be misleading.
Then you can train a few different models (with some architectural changes, for example) on the same train set and validate them on the same test set; this is a regular validation process.
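The split step above can be sketched in a few lines of plain Python (the file names are hypothetical - substitute your own image list):

```python
# Sketch: a random, non-overlapping train/test split of image ids.
import random

random.seed(0)
image_ids = [f"img_{i:04d}" for i in range(1000)]  # hypothetical dataset
random.shuffle(image_ids)

split = int(0.8 * len(image_ids))                  # 80/20 split
train_ids, test_ids = image_ids[:split], image_ids[split:]

# the two subsets must never overlap, or evaluation is meaningless
assert not set(train_ids) & set(test_ids)
print(len(train_ids), len(test_ids))
```

For YOLO specifically you would then point the training config at the train list and report metrics such as mAP on the held-out test list only.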

linearmodels or statsmodels - what are the main differences?

Can anyone explain the difference between statsmodels and linearmodels? They are very similar in many respects, but I assume they must also differ.
Does anyone have any insights to share?
linearmodels mostly contains models that are not (yet) available in statsmodels, especially models for panel data, multivariate or system models, and some instrumental variable models.
There is some overlap in functionality, for example generalized method of moments (GMM): GMM in linearmodels is for specific linear models, while GMM in statsmodels is designed for general nonlinear GMM with some linear models as special cases.
The author of linearmodels is also one of the main maintainers of statsmodels.
There are some smaller differences in design and style that came from different preferences by the authors of the two packages or because statsmodels handles a much larger and heterogeneous set of models and classes.

PyMC: Hidden Markov Models

How suitable is PyMC in its currently available versions for modelling continuous emission HMMs?
I am interested in having a framework where I can easily explore model variations, without having to update E- and M-step, and dynamic programming recursions for every change I make to the model.
More specific questions are:
When modelling an HMM in PyMC, can I answer the 'typical' questions one would like to solve - i.e., besides parameter estimation, also infer the most likely state sequence (as usually done with the Viterbi algorithm), or solve a smoothing problem?
As compared to an implementation with Expectation Maximization, I would expect a sampling based approach to be slower. If that gives me more flexibility on the model building side, that is fine. I would imagine using PyMC for prototyping models. I am wondering though, if I can expect PyMC to handle inference for models with > 10k observations to finish in any reasonable amount of time.
Would you recommend starting out with PyMC2 or PyMC3 for model building? I know that the inference engine changed between the versions, so I especially wonder what type of sampler might be better suited.
If you think PyMC is not a good choice for my use case, that definitely helps as an answer as well.
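For reference, the Viterbi decoding mentioned above is short enough to write directly in numpy once parameters are estimated (whether by EM or by PyMC sampling). The toy HMM below - two states, Gaussian emissions, all parameters invented - finds the most likely state path by dynamic programming over log-probabilities:

```python
# Sketch: Viterbi decoding for a 2-state HMM with Gaussian emissions.
# All parameters are made up for illustration.
import numpy as np
from scipy.stats import norm

pi = np.array([0.5, 0.5])                   # initial state distribution
A = np.array([[0.9, 0.1], [0.2, 0.8]])      # transition matrix (assumed)
means, sd = np.array([0.0, 3.0]), 1.0       # emission parameters (assumed)

obs = np.array([0.1, -0.2, 2.9, 3.3, 0.05])
logB = norm.logpdf(obs[:, None], means, sd)  # T x 2 emission log-likelihoods

# forward pass: best log-score of any path ending in each state
delta = np.log(pi) + logB[0]
back = np.zeros((len(obs), 2), dtype=int)
for t in range(1, len(obs)):
    scores = delta[:, None] + np.log(A)
    back[t] = scores.argmax(axis=0)
    delta = scores.max(axis=0) + logB[t]

# backtrack the most likely state sequence
path = [int(delta.argmax())]
for t in range(len(obs) - 1, 0, -1):
    path.append(int(back[t, path[-1]]))
path.reverse()
print(path)   # [0, 0, 1, 1, 0]
```

A sampling-based PyMC model would replace the fixed parameters with priors; the decoding step itself stays a cheap post-processing pass like this one.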

Is this a viable MapReduce use case or even possible to execute?

I am new to Hadoop, MapReduce, and Big Data, and am trying to evaluate their viability for a specific use case that is extremely interesting to the project I am working on. I am not sure, however, whether what I would like to accomplish is A) possible or B) recommended with the MapReduce model.
We essentially have a significant volume of widgets (known structure of data) and pricing models (codified in JAR files) and what we want to be able to do is to execute every combination of widget and pricing model to determine the outcomes of the pricing across the permutations of the models. The pricing models themselves will examine each widget and determine pricing based on the decision tree within the model.
This makes sense to me from a parallel-processing-on-commodity-infrastructure perspective, but from a technical perspective I do not know whether it is possible to execute external models within the MR jobs, and from a practical perspective whether I am trying to force a use case onto the technology.
The question therefore becomes: is it possible; does it make sense to implement in this fashion; and if not, what other options or patterns are better suited to this scenario?
EDIT
The volume and variety will grow over time. Assume for the sake of discussion that we currently have a terabyte of widgets and tens of pricing models. We would then expect to grow into multiple terabytes and hundreds of pricing models, and the execution of the permutations would happen frequently as widgets change and/or are added and as new categories of pricing models are introduced.
You certainly need a scalable, parallelizable solution, and Hadoop can be that. You just have to massage your solution a bit so it fits into the Hadoop world.
First, you'll need to make the models and widgets implement common interfaces (speaking very abstractly here) so that you can apply an arbitrary model to an arbitrary widget without having to know anything about the actual implementation or representation.
Second, you'll have to be able to reference both models and widgets by id. That will let you build objects (writables) that hold the id of a model and the id of a widget and thus represent one "cell" in the cross product of widgets and models. You distribute these instances across multiple servers and in doing so distribute the application of models to widgets across multiple servers. These objects (call it class ModelApply) would hold the results of a specific model-to-widget application and can be processed in the usual way with Hadoop to report on the best applications.
Third, and this is the tricky part, you need to compute the actual cross product of models and widgets. You say the number of models (and therefore model id's) will number at most in the hundreds. This means you could load that list of id's into memory in a mapper and map it against widget id's. Each call to the mapper's map() method would receive one widget id and would write out one instance of ModelApply for each model.
I'll leave it at that for now.
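The cross-product mapper described in the third step can be sketched in a few lines (shown here as plain Python, as you might write it for Hadoop Streaming; all ids are hypothetical):

```python
# Sketch: the cross-product step. The small model-id list is loaded into
# memory once per mapper; each map() call over one widget id emits one
# (model_id, widget_id) pair - one "ModelApply" record - per model.
model_ids = ["pricing_v1", "pricing_v2", "pricing_v3"]  # assumed, in memory

def map_widget(widget_id):
    # one output record per model-to-widget application
    return [(m, widget_id) for m in model_ids]

pairs = []
for widget_id in ["w001", "w002"]:   # stand-in for the widget input split
    pairs.extend(map_widget(widget_id))

print(len(pairs))   # 2 widgets x 3 models = 6 records
```

Downstream, a reducer (or a second job) would run each pricing model against its widget and aggregate the results.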

Implementing a model written in predicate calculus in Prolog - how do I start?

I have four sets of algorithms that I want to set up as modules, but I need all algorithms executed at the same time within each module. I'm a complete noob and have no programming experience. I do, however, know how to prove my models are decidable and have already done so (I know applied logic).
The models are sensory parsers. I know how to create the state spaces for the modules, but I don't know how to program driver access to my web cam in Prolog (I have a Toshiba Satellite laptop with a built-in web cam). I also don't know how to link the input from the web cam to the variables in the algorithms I've written. The variables I use, when combined and identified with functions, are set to identify unknown input using a probabilistic database search for the best match after a breadth-first search. The parsers aren't holistic, which is why I want to run them either in parallel or as needed.
How should I go about this?
"I also don't know how to link the input from the web cam to the variables in the algorithms I've written."
I think the most common way to do this is the machine learning approach: first calculate features from your video stream (like the position of color blobs, optical flow, the amount of green in the image, whatever you like). Then you use supervised learning on labeled data to train models like HMMs, SVMs, or ANNs to recognize the labels from the features. The labels are usually higher-level things like faces, a smile, or waving hands.
Depending on the nature of your "variables", they may already be covered at the feature level, i.e. they can be computed from the data in a known way. If that is the case, you can get away without training/learning.
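To make the feature-extraction step concrete, here is a toy example of one of the features mentioned above ("amount of green in the image"), computed with numpy on a synthetic frame; a real pipeline would grab frames from the webcam, e.g. with OpenCV, instead of fabricating one:

```python
# Sketch: one scalar feature from an RGB frame - the fraction of maximum
# possible green. The frame is synthetic for the example.
import numpy as np

frame = np.zeros((4, 4, 3), dtype=np.uint8)  # fake 4x4 RGB frame
frame[:2, :, 1] = 255                         # top half fully green

green_fraction = frame[:, :, 1].mean() / 255.0
print(green_fraction)   # 0.5
```

Each frame then yields a small feature vector like this, which is what the HMM/SVM/ANN consumes, rather than raw pixels.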
