How to store an equation in EEPROM? - pic

I'm working with embedded systems. For the sake of explanation, I'm working with a dsPIC33EP and a simple serial EEPROM.
Suppose I'm building a controller that uses a linear control scheme (y = mx + b). If the controller needs many different settings it's easy: store the m and the b in EEPROM and retrieve them for the different settings.
Now suppose I want to have different equations for different settings. I would have to pre-program all the equations and then have a method for selecting the right equation and pulling its settings from the EEPROM. It's harder because you need to know the equations ahead of time, but it's still doable.
Now suppose that you don't know the equations ahead of time. Maybe you have to do a piecewise approximation, for example. How could you store something like that in memory, so that all the controller has to do is feed it a sensor reading and get back a control variable? Kind of like passing a variable to a function and getting the answer passed back.
How could you store a function like that in memory if only the current state is important?
How could you store a function like that if past states are important (if the control equation is second, third or fourth order for example)?

The dsPICs have limited RAM but quite a bit of flash, enough for a small but effective text parser. Have you thought of using some form of text-based script? Such scripts can be translated into a more efficient data format at run time.
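One concrete way to realise the piecewise idea, whether the breakpoints come from a parsed script or are written straight into the EEPROM: store a table of (x, y) breakpoints and interpolate between them at run time. A minimal C sketch; eeprom_read_word() and the word layout are assumptions standing in for whatever serial-EEPROM driver and format you actually use:

    #include <stdint.h>

    /* Assumed EEPROM word layout for one piecewise-linear map:
     *   word 0        : number of breakpoints N
     *   words 1..2N   : N pairs (x_i, y_i), x strictly ascending
     * eeprom_read_word() is a placeholder for your EEPROM driver. */
    extern uint16_t eeprom_read_word(uint16_t addr);

    int16_t piecewise_eval(uint16_t base, int16_t x)
    {
        uint16_t n  = eeprom_read_word(base);
        int16_t  x0 = (int16_t)eeprom_read_word(base + 1);
        int16_t  y0 = (int16_t)eeprom_read_word(base + 2);
        if (x <= x0)
            return y0;                          /* clamp below the table  */

        for (uint16_t i = 1; i < n; ++i) {
            int16_t x1 = (int16_t)eeprom_read_word(base + 2*i + 1);
            int16_t y1 = (int16_t)eeprom_read_word(base + 2*i + 2);
            if (x <= x1)                        /* interpolate in [x0,x1] */
                return y0 + (int32_t)(y1 - y0) * (x - x0) / (x1 - x0);
            x0 = x1;
            y0 = y1;
        }
        return y0;                              /* clamp above the table  */
    }

For the higher-order case in the question, one option is to keep a short history of past readings in a RAM ring buffer and let stored coefficients act on that history; the EEPROM then only ever holds numbers, never code.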

Related

Matrix multiplier with Chisel

I want to describe a Matrix multiplier with Chisel, but there are some things that I do not understand.
First, I found a response giving the code for a 3x5 matrix multiplier. I would like to generalize it to any square matrix up to 128x128. I know that in Chisel I can parameterize a module by giving it a size parameter (so that I use n.W instead of a fixed width).
But at the end of the day a Verilog file will be generated, right? So the parameters have to be fixed? I'm probably confusing some things. My purpose is to adapt the code to be able to perform any matrix multiplication up to 128x128, and I do not know if that is technically possible.
The advantage of Chisel is that everything can be parameterized. That said, at the end of the day, when you are making your physical hardware, the parameters obviously have to be fixed. The advantage of making it parameterized is that if you don't know your exact requirements yet (like the die area available), you can have a parameterized version ready, and when the time comes you plug in the values you need and generate the Verilog file for those parameters. And to answer your question: yes, it is possible to perform any matrix multiplication up to 128x128 (or beyond, if your laptop's RAM is sufficient). You only get the Verilog when you create a hardware driver; this tells you how to create Verilog from Chisel, so go ahead and create your parameterized hardware.

GALSIM: Using interpolated image class without specifying flux normalisation

For a project I am undertaking, I will need to calculate the derivative of a given surface brightness profile which is convolved with a pixel response function (as well as PSF etc.)
For various reasons, but mainly for consistency, I wish to do this using the guts of the GalSim code. However, since in this case the 'flux' (defined as the sum of the non-parametric model) no longer has a physical meaning in terms of the image itself (it will always be considered as noise-free in this case), there are certain situations where I would like to be able to define the interpolated image without a flux normalisation.
The code does not seem to care if the 'flux' is negative, but I am coming across certain situations where the 'flux' is within machine precision of zero, and thus the assertion "dabs(flux-flux_tot) <= dabs(flux_tot)" fails.
My question is therefore: Can one specify a non-parametric model to interpolate over without specifying a flux normalisation value?
There is currently no way to do this using the galsim.InterpolatedImage() class; you could open an issue to make this feature request at the GalSim repository on GitHub.
There is a way to do this using the guts of GalSim; an example is illustrated in the lensing power spectrum functionality if you are willing to dig into the source code (lensing_ps.py -- just do a search for SBInterpolatedImage to find the relevant code bits). The basic idea is that instead of using galsim.InterpolatedImage() you use the associated C++ class, galsim._galsim.SBInterpolatedImage(), which is accessible in python. The SBInterpolatedImage can be initialized with an image and choices of interpolants in real- and Fourier-space, as shown in the examples in lensing_ps.py, and then queried using the xValue() method to get the value interpolated to some position.
This trick was necessary in lensing_ps.py because we were interpolating shear fields which tend to have a mean of zero, so we ran into the same problem you did. Use of the SBInterpolatedImage class is not generally recommended for GalSim users (we recommend using only the python classes) but it is definitely a way around your problem for now.

When to store quaternion vs matrix in static and dynamic objects (data structure design)

My question is about design and possible suggestions for the following scenario:
I am writing a 3d visualizer. For my renderable objects I would like to store the minimum data possible (so quaternions are naturally nice for rotation).
At some point I must extract a Matrix for rendering which requires computation and temporary storage on every frame update (even for objects that do not change spatially).
Given that many objects remain static and don't need to be rotated locally, would it make sense to store the matrix instead and thereby avoid the computation for each object on each frame? Is there a best-practice approach to this, perhaps from a game-engine design point of view?
I am currently a bit torn between the two extremes of storing either position + quaternion or a 4x3/4x4 matrix. Looking at openFrameworks (not necessarily trying to achieve the same goal as me), they seem to use a hybrid where they store a quaternion AND a matrix (the matrix always reflects the quaternion), so it's always ready when needed but has to be updated along with every change to the quaternion.
More compact storage requires only 3 scalars, so Euler angles or exponential maps (Rodrigues) can be used. Quaternions are a good compromise between compactness and the speed of conversion to a matrix.
From a design point of view, there is a good rule: "make all design decisions as LATE as possible". In your case, just encapsulate (isolate) the rotation (transformation) representation, to be able in the future to change the physical storage of the data in its different states (file, memory, rendering and more). It also enables platform-specific optimizations, such as keeping the data on the GPU or the CPU, and more.
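To illustrate that encapsulation, and the quaternion-plus-cached-matrix hybrid the question mentions, here is a minimal C sketch (all names are hypothetical); the matrix is only rebuilt when the rotation has actually changed:

    #include <stdbool.h>

    typedef struct { float x, y, z, w; } Quat;
    typedef struct { float m[16]; }      Mat4;  /* column-major 4x4 */

    typedef struct {
        Quat  rotation;       /* unit quaternion                      */
        float position[3];
        Mat4  cached;         /* world matrix derived from the above  */
        bool  dirty;          /* true when 'cached' is out of date    */
    } Transform;

    /* Standard unit-quaternion to rotation-matrix conversion, with the
     * translation written into the last column. */
    static void rebuild(Transform *t)
    {
        const Quat q = t->rotation;
        float *m = t->cached.m;
        m[0]  = 1 - 2*(q.y*q.y + q.z*q.z);
        m[1]  = 2*(q.x*q.y + q.z*q.w);
        m[2]  = 2*(q.x*q.z - q.y*q.w);
        m[4]  = 2*(q.x*q.y - q.z*q.w);
        m[5]  = 1 - 2*(q.x*q.x + q.z*q.z);
        m[6]  = 2*(q.y*q.z + q.x*q.w);
        m[8]  = 2*(q.x*q.z + q.y*q.w);
        m[9]  = 2*(q.y*q.z - q.x*q.w);
        m[10] = 1 - 2*(q.x*q.x + q.y*q.y);
        m[3]  = m[7] = m[11] = 0.0f;
        m[12] = t->position[0];
        m[13] = t->position[1];
        m[14] = t->position[2];
        m[15] = 1.0f;
    }

    /* Callers never touch 'cached' directly; they go through these two,
     * so the storage representation can change later without touching
     * the rest of the renderer. */
    void transform_set_rotation(Transform *t, Quat q)
    {
        t->rotation = q;
        t->dirty    = true;
    }

    const Mat4 *transform_matrix(Transform *t)
    {
        if (t->dirty) {
            rebuild(t);
            t->dirty = false;
        }
        return &t->cached;
    }

Static objects then pay the conversion cost only once, and the public accessors stay the same if you later decide to store only the matrix (or only the quaternion).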
Been there.
First: keep in mind the omnipresent struggle of time against space (in computer science, processing time against memory requirements).
You said you want to keep the minimum information possible at first (space), and then talked about a temporary matrix reflecting the quaternions, which is more of a time worry.
If you accept a tip, I would go for the matrices. They are generally the performance-wise standard for 3D graphics, and their size easily becomes irrelevant next to the object data itself.
Just to give an idea: on most GPUs, transforming a vector by the identity (no change) is actually faster than checking whether it needs transformation and then doing nothing.
As for engines, I can't think of one that does not apply the transformations to every vertex every frame. Even if the objects stay in place, their positions still have to go through the projection and view matrices.
(Does this answer it? Maybe I misunderstood you.)

Techniques for handling arrays whose storage requirements exceed RAM

I am author of a scientific application that performs calculations on a gridded basis (think finite difference grid computation). Each grid cell is represented by a data object that holds values of state variables and cell-specific constants. Until now, all grid cell objects have been present in RAM at all times during the simulation.
I am running into situations where the people using my code wish to run it with more grid cells than they have available RAM. I am thinking about reworking my code so that information on only a subset of cells is held in RAM at any given time. Unfortunately the grids (or matrices if you prefer) are not sparse, which eliminates a whole class of possible solutions.
Question: I assume that there are libraries out in the wild designed to facilitate this type of data access (i.e. retrieve constants and variables, update variables, store for future reference, wipe memory, move on...). After several hours of searching Google and Stack Overflow, I have found relatively few libraries of this sort.
I am aware of a few options, such as this one from the HSL mathematical library: http://www.hsl.rl.ac.uk/specs/hsl_of01.pdf. I'd prefer to work with something that is open source and is written in Fortran or C. (my code is mostly Fortran 95/2003, with a little C and Python thrown in for good measure!)
I'd appreciate any suggestions regarding available libraries or advice on how to reformulate my problem. Thanks!
Bite the bullet and roll your own?
I deal with too-large data all the time, such as 30,000+ series of half-hourly data spanning decades. Because of the regularity of the data (daylight-saving changeovers are a problem, though), it proved quite straightforward to devise a scheme involving a random-access disc file and procedures ReadDay and WriteDay that take a series number and a day number, with further details because series start and stop at different dates. Thus, a day's data that would once have been Array(Run,DayNum) is now obtained via ReturnCode = ReadDay(Run,DayNum,Array) and so forth, the return codes indicating the presence or absence of that day's data, etc. The key is that a day's data is a convenient and (almost) regular size, and although my program allocates a buffer of one record per series, it runs in ~100 MB of memory rather than gigabytes.
Because your array is non-sparse, it is regular. Given that a grid cell's data are of fixed size, you could devise a random-access disc file with each record holding one cell, or perhaps a row's worth of cells (or a column's worth), or some other worthwhile blob size. I chose 4,096 bytes per record, as that is the disc file allocation size. Let the computer's operating system and disc storage controller do whatever buffering to real memory they feel up to. Typical execution is then limited by the speed of data transfer unless the local computation is heavy; thus I see CPU usage of a few percent until data requests start being satisfied from buffers.
Because Fortran uses the same syntax for functions as for arrays (unlike, say, Pascal), instead of declaring DIMENSION ARRAY(Big,Big) you would remove that and devise FUNCTION ARRAY(i,j), and all read references in your source file stay as they are. Alas, in the absence of a "palindromic" function declaration, assignments of values to your array have to be done with a different syntax, so you devise a subroutine or similar. Possibly a scratchpad array could be collated, worked on with the convenient syntax, and then written back if changed.
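The same idea in C, for comparison: one fixed-size record per grid cell in a binary file opened for random access (the Cell layout and function names are placeholders, not a library API):

    #include <stdio.h>

    /* Fixed-size record for one grid cell; adjust to your model. */
    typedef struct {
        double state[8];       /* state variables         */
        double constants[4];   /* cell-specific constants */
    } Cell;

    /* Byte offset of cell (i, j) in an ncols-wide grid.
     * For files larger than 2 GB use fseeko/_fseeki64 instead of fseek. */
    static long cell_offset(long i, long j, long ncols)
    {
        return (i * ncols + j) * (long)sizeof(Cell);
    }

    int read_cell(FILE *f, long i, long j, long ncols, Cell *out)
    {
        if (fseek(f, cell_offset(i, j, ncols), SEEK_SET) != 0) return -1;
        return fread(out, sizeof(Cell), 1, f) == 1 ? 0 : -1;
    }

    int write_cell(FILE *f, long i, long j, long ncols, const Cell *in)
    {
        if (fseek(f, cell_offset(i, j, ncols), SEEK_SET) != 0) return -1;
        return fwrite(in, sizeof(Cell), 1, f) == 1 ? 0 : -1;
    }

Reading or writing a whole row (an array of Cell) per call amortises the seek cost, which is the "worthwhile blob size" point above; the operating system's page cache then does most of the buffering for you.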

Implementing a model written in predicate calculus in Prolog: how do I start?

I have four sets of algorithms that I want to set up as modules, but I need all algorithms executed at the same time within each module. I'm a complete noob and have no programming experience. I do, however, know how to prove my models are decidable and have already done so (I know Applied Logic).
The models are sensory parsers. I know how to create the state spaces for the modules, but I don't know how to program driver access to my webcam from Prolog (I have a Toshiba Satellite laptop with a built-in webcam). I also don't know how to link the input from the webcam to the variables in the algorithms I've written. The variables I use, when combined and identified with functions, are set to identify unknown input using a probabilistic database search for the best match after a breadth-first search. The parsers aren't holistic, which is why I want to run them either in parallel or as needed.
How should I go about this?
"I also don't know how to link the input from the webcam to the variables in the algorithms I've written."
I think the most common way to do this is the machine-learning approach: first calculate features from your video stream (like the position of color blobs, optical flow, the amount of green in the image, whatever you like). Then use supervised learning on labeled data to train models such as HMMs, SVMs, or ANNs to recognize the labels from the features. The labels are usually higher-level things like faces, a smile, or waving hands.
Depending on the nature of your "variables", they may already be covered at the feature level, i.e. they can be computed from the data in a known way. If that is the case, you can get away without any training/learning.
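To make the feature idea concrete, here is a minimal C sketch of one of the features mentioned above, the amount of green in a frame (the Frame layout and function name are hypothetical); a real pipeline would compute a whole vector of such features per frame and feed it to the trained classifier:

    #include <stddef.h>
    #include <stdint.h>

    /* One frame of interleaved 8-bit RGB pixels (assumed layout). */
    typedef struct {
        const uint8_t *pixels;   /* R,G,B, R,G,B, ... */
        size_t width;
        size_t height;
    } Frame;

    /* Feature: fraction of total brightness contributed by the green
     * channel, in [0, 1]. */
    double green_fraction(const Frame *f)
    {
        uint64_t green = 0, total = 0;
        size_t n = f->width * f->height;
        for (size_t i = 0; i < n; ++i) {
            const uint8_t *px = f->pixels + 3 * i;
            green += px[1];
            total += (uint64_t)px[0] + px[1] + px[2];
        }
        return total ? (double)green / (double)total : 0.0;
    }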
