What is the proper way of integrating a function in parts using boost odeint? - c++11

I have some code that generates an acoustic time series using a second-order, one-dimensional differential equation.
I decided to use boost::odeint to integrate the equations, but because this is being used in a real-time simulation, I need to generate a small number of steps repeatedly.
The code is of the following form:
class Integrator
{
public:
    typedef std::vector<double> state_type;

    // This is how boost::odeint wants the system to look.
    /* NOTE: We have to cast our second-order diff eq into a set of first-order diff eqs
       because ODEINT only handles first-order ODEs.
    */
    void operator() ( const state_type &x , state_type &dxdt , const double t )
    {
        dxdt[0] = x[1];
        //... Bunch of calculations here to determine dxdt and ddxdtTerm
        dxdt[1] = ddxdtTerm;
    }

    state_type x_state;
    state_type dxdt_state;
    std::vector<double> x_buffer;
    std::vector<double> time_buffer;
    double t_sec;
};
struct IntegrationObserver
{
    std::vector<double> & xVals;
    std::vector<double> & tVals;

    IntegrationObserver(std::vector<double> & in_xVals,
                        std::vector<double> & in_tVals) :
        xVals(in_xVals),
        tVals(in_tVals)
    {
    }

    void operator()( const std::vector<double> &x , double t )
    {
        xVals.push_back( x[0] );
        tVals.push_back( t );
    }
};
I then have an acoustics generation thread that generates 100ms of acoustic data once every tenth of a second (at 22050 Hz):
std::vector<Integrator> g_intVec;

void audioBufferGeneratorThread()
{
    std::cout << "Hello from the audio buffer generator thread!" << std::endl;
    Uint32 lastTime = SDL_GetTicks();
    Uint32 thisTime = lastTime;
    size_t acousticBufsize = 0.1 * 22050;
    float* acousticWorkBuffer = new float[acousticBufsize];
    memset(acousticWorkBuffer, 0, acousticBufsize * sizeof(float));
    while(!g_terminating)
    {
        bool didWork = false;
        //Check time elapsed, have we seen 0.10 seconds pass?
        thisTime = SDL_GetTicks();
        double t_add = (thisTime - lastTime) / 1000.0;
        //For each integrator
        if(t_add >= 0.1)
        {
            t_add = 0.1;
            lastTime = thisTime;
            didWork = true;
            {
                std::lock_guard<std::mutex> guard(g_integratorMutex);
                int bubIdx = 0;
                for(auto integrator : g_intVec)
                {
                    //Only generate a maximum of 1 second of acoustics for each integrator...
                    //The main thread deletes integrators once they reach that limit.
                    if(integrator.t_sec >= 1.0)
                    {
                        continue;
                    }
                    size_t steps;
                    boost::numeric::odeint::runge_kutta4<Integrator::state_type> stepper;
                    //Generate some audio
                    steps = boost::numeric::odeint::integrate_const(stepper,
                                integrator,
                                integrator.x_state,
                                integrator.t_sec,
                                integrator.t_sec + t_add,
                                1.0/22050.,
                                IntegrationObserver(integrator.x_buffer, integrator.time_buffer));
                    integrator.t_sec += t_add;
                    bubIdx++;
                }
                //Sum all the audio together
                //(the lock_guard releases g_integratorMutex at the end of this scope)
            }
            {
                // std::lock_guard<std::mutex> guard(g_audioBufferMutex);
                //Push the audio to the audio buffer
            }
            memset(acousticWorkBuffer, 0, acousticBufsize * sizeof(float));
        }
        //Yield the thread if we didn't generate any acoustics.
        else
        {
            std::this_thread::yield();
        }
    }
    delete[] acousticWorkBuffer;
    std::cout << "Goodbye from the audio buffer generator thread..." << std::endl;
}
As one can see, this code is meant to store the previous X and DXDT states, as well as the current time, so that the integrator can pick up where it left off later.
However, the problem appears to be that the integrator recalculates from t = 0 every time, so each pass through the acoustics generator loop takes longer and longer until the code can no longer generate the acoustics in real time (i.e. generate 100 ms of acoustics in under 100 ms of wall-clock time).
It appears that I'm using the odeint library incorrectly, but I am unable to find a proper example of how to do what I'm trying to do here: generate the first 100ms of data, then the next 100ms, then the next, etc...
So my question is thus:
How can I use boost::odeint to generate sequential chunks of the equation without it starting from t=0 each time?
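Not an authoritative answer, just a minimal, self-contained sketch of the chunked usage I believe is intended, using a toy damped oscillator in place of the acoustic model: the state vector and the current time live outside the loop, and each call to integrate_const only covers the next 100 ms. (The oscillator coefficients and chunk count are arbitrary assumptions for illustration.)
#include <boost/numeric/odeint.hpp>
#include <iostream>
#include <vector>

struct Oscillator
{
    typedef std::vector<double> state_type;
    // Toy second-order system rewritten as two first-order equations.
    void operator()( const state_type &x , state_type &dxdt , double /*t*/ ) const
    {
        dxdt[0] = x[1];
        dxdt[1] = -1000.0 * x[0] - 0.5 * x[1];
    }
};

int main()
{
    Oscillator sys;
    Oscillator::state_type x = { 1.0, 0.0 };  // persists between chunks
    double t = 0.0;                           // persists between chunks
    boost::numeric::odeint::runge_kutta4<Oscillator::state_type> stepper;
    for(int chunk = 0; chunk < 10; ++chunk)   // ten 100 ms chunks
    {
        boost::numeric::odeint::integrate_const(stepper, sys, x, t, t + 0.1, 1.0/22050.0);
        t += 0.1;
        std::cout << "t = " << t << "  x = " << x[0] << std::endl;
    }
}
Each iteration resumes from the x and t left behind by the previous one, so the cost per chunk stays constant.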

Related

How to perform integrals on an arduino

I am new to Arduino and I am trying to make a program that calculates the percentage of charge remaining in a battery, using the coulomb counting method (below is a picture with the formula). Is it possible to perform this type of calculation on an Arduino?
1.) The Model
Assume CBat is the capacity of the battery and constant (physicists prefer the letter Q for electrical charge; CBat may decrease with aging, the "State of Health").
By definition:
SOC(t) = Q(t) / CBat
Differential equation:
dQ/dt = I(t)
Approximation with a time step of dt = 1 s:
Q(t) - Q(t-1) ~= I(t)
or
SOC(t) ~= SOC(t-1) + I(t)/CBat
2.) Arduino:
The following is untested code sketched by hand; it has not been run through a compiler.
// Assume current coming from serial port
const float C_per_Ah = 3600;
// signal adjustment
const float current_scale_A = 0.1; // Ampere per serial step
const int current_offset = 128; // Offset of serial value for I=0A
float CBat_Ah = 94; /* Assumed Battery Capacity in Ah */
float Cbat_C = CBat_Ah * C_per_Ah; /* ... in Coulomb */
float SOC = 0.7; /* Initial State of Charge */
int incomingByte = current_offset; // for incoming serial data, initial 0 Ampere
float I = 0; /* current */
// the setup routine runs once when you press reset:
void setup()
{
Serial.begin(9600); // opens serial port, sets data rate to 9600 bps
}
// the loop routine runs over and over again forever:
void loop()
{
delay(1000); // wait for a second
if (Serial.available() > 0)
{
// read the incoming byte:
incomingByte = Serial.read();
}
I = (incomingByte-current_offset) * current_scale_A;
SOC = SOC + I/Cbat_C;
Serial.print("New SOC: ");
Serial.println(SOC, 4);
}

Extracting patterns from time-series

I have a time-series, which essentially amounts to some instrument recording the current time whenever it makes a "detection". The sampling is therefore not at a constant rate; however, we can treat it as such by "re-sampling", relying on the fact that the detections are made reliably, and simply inserting 0's to "fill in" the gaps. This will be important later...
The instrument should detect the "signals" sent by another, nearby instrument. This second instrument emits a signal at some unknown period, T (e.g. 1 signal per second), with a "jitter" likely on the order of a few tenths of a percent of the period.
My goal is to determine this period (or frequency, if you like) using only the timestamps recorded by the "detecting" instrument. Unfortunately, however, the detector is flooded with noise, and a significant amount (I estimate 97-98%) of "detections" (and therefore "points" in the time-series) are due to noise. Therefore, extracting the period will require more careful analysis.
My first thought was to simply feed the time series into an FFT algorithm (I'm using FFTW/DHT), however this wasn't particularly enlightening. I've also tried my own (admittedly rather crude) algorithm, which simply computed a cross-correlation of the series with "clean" series of increasing period. I didn't get very far with this, either, and there are quite a handful of details to consider (phase, etc).
It occurs to me that something like this must've been done before, and surely there's a "nice" way to accomplish it.
Here's my approach. Given a period, we can score it using a dynamic program to find the subsequence of detection times that includes the first and last detection and maximizes the sum of gap log-likelihoods, where the gap log-likelihood is defined as minus the square of the difference of the gap and the period (Gaussian jitter model).
If we have approximately the right period, then we can get a very good gap sequence (some weirdness at the beginning and end and wherever there is a missed detection, but this is OK).
If we have the wrong period, then we end up with basically exponential jitter, which has low log-likelihood.
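In symbols (my own restatement of what the code below scores, not notation from the question): for a candidate period T and a chosen subsequence of detection times t_1 < t_2 < ... < t_k, the dynamic program maximizes
sum over j of -( (t_{j+1} - t_j) - T )^2
which is the Gaussian-jitter log-likelihood of the gaps, up to constants.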
The C++ below generates fake detection times with a planted period and then searches over periods. Scores are normalized by a (bad) estimate of the score for Poisson noise, so wrong periods score about 0.4. See the plot below.
#include <algorithm>
#include <cmath>
#include <iostream>
#include <limits>
#include <random>
#include <vector>
namespace {
static constexpr double kFalseNegativeRate = 0.01;
static constexpr double kCoefficientOfVariation = 0.003;
static constexpr int kSignals = 6000;
static constexpr int kNoiseToSignalRatio = 50;
template <class URNG>
std::vector<double> FakeTimes(URNG &g, const double period) {
std::vector<double> times;
std::bernoulli_distribution false_negative(kFalseNegativeRate);
std::uniform_real_distribution<double> phase(0, period);
double signal = phase(g);
std::normal_distribution<double> interval(period,
kCoefficientOfVariation * period);
std::uniform_real_distribution<double> noise(0, kSignals * period);
for (int i = 0; i < kSignals; i++) {
if (!false_negative(g)) {
times.push_back(signal);
}
signal += interval(g);
for (double j = 0; j < kNoiseToSignalRatio; j++) {
times.push_back(noise(g));
}
}
std::sort(times.begin(), times.end());
return times;
}
constexpr double Square(const double x) { return x * x; }
struct Subsequence {
double score;
int previous;
};
struct Result {
double score = std::numeric_limits<double>::quiet_NaN();
double median_interval = std::numeric_limits<double>::quiet_NaN();
};
Result Score(const std::vector<double> &times, const double period) {
if (times.empty() || !std::is_sorted(times.begin(), times.end())) {
return {};
}
std::vector<Subsequence> bests;
bests.reserve(times.size());
bests.push_back({0, -1});
for (int i = 1; i < times.size(); i++) {
Subsequence best = {std::numeric_limits<double>::infinity(), -1};
for (int j = i - 1; j > -1; j--) {
const double difference = times[i] - times[j];
const double penalty = Square(difference - period);
if (difference >= period && penalty >= best.score) {
break;
}
const Subsequence candidate = {bests[j].score + penalty, j};
if (candidate.score < best.score) {
best = candidate;
}
}
bests.push_back(best);
}
std::vector<double> intervals;
int i = bests.size() - 1;
while (true) {
int previous_i = bests[i].previous;
if (previous_i < 0) {
break;
}
intervals.push_back(times[i] - times[previous_i]);
i = previous_i;
}
if (intervals.empty()) {
return {};
}
const double duration = times.back() - times.front();
// The rate is doubled because we can look for a time in either direction.
const double rate = 2 * (times.size() - 1) / duration;
// Mean of the square of an exponential distribution with the given rate.
const double mean_square = 2 / Square(rate);
const double score = bests.back().score / (intervals.size() * mean_square);
const auto median_interval = intervals.begin() + intervals.size() / 2;
std::nth_element(intervals.begin(), median_interval, intervals.end());
return {score, *median_interval};
}
} // namespace
int main() {
std::default_random_engine g;
const auto times = FakeTimes(g, std::sqrt(2));
for (int i = 0; i < 2000; i++) {
const double period = std::pow(1.001, i) / 3;
const Result result = Score(times, period);
std::cout << period << ' ' << result.score << ' ' << result.median_interval
<< std::endl;
}
}

How to get a random number in Metal shader?

How would I go about getting a random number in a Metal shader?
I searched for "random" in The Metal Shading Language Specification, but found nothing.
It looks like there's not one built in. This example code for MetalShaderShowcase/AAPLWoodShader.metal defines its own simple rand function.
// Generate a random float in the range [0.0f, 1.0f] using x, y, and z (based on the xor128 algorithm)
float rand(int x, int y, int z)
{
    int seed = x + y * 57 + z * 241;
    seed = (seed << 13) ^ seed;
    return (( 1.0 - ( (seed * (seed * seed * 15731 + 789221) + 1376312589) & 2147483647) / 1073741824.0f) + 1.0f) / 2.0f;
}
So I was working on a random number generator for another project and had been wanting to package it into a neat framework for a while.
Your question pushed me to do just that. If you don't mind the shameless plug, here is a very simple framework that will generate a random number for you in a Metal shader based on (up to) three seeds that you give it. The code is based on the following research paper, which describes how to create random numbers on parallel processors for Monte Carlo simulations. It also has a (theoretical) period of 2^121, so it should be good for most reasonable calculations that can be done on a GPU.
All you have to do in your shader is call an initializer, then call rand(), like so:
// Initialize a random number generator, seeds 2 and 3 are optional
Loki rng = Loki(seed1, seed2, seed3);
// get a random float [0,1)
float random_float = rng.rand();
I also included a sample project in the repo so you can see how it is used.
Instead of computing the random numbers on the GPU, you can also compute a bunch of random numbers on the CPU and pass them into the shader using a uniform / MTLBuffer.
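As a rough illustration of that CPU-side approach (only a sketch; the function name, count, and seeding are my own assumptions, and copying the result into an MTLBuffer is left to you):
#include <cstdint>
#include <random>
#include <vector>

// Fill a host-side buffer with uniform floats in [0, 1); the shader then just
// indexes into the uploaded buffer instead of generating numbers itself.
std::vector<float> makeRandomBuffer(std::size_t count, std::uint32_t seed)
{
    std::mt19937 gen(seed);
    std::uniform_real_distribution<float> dist(0.0f, 1.0f);
    std::vector<float> values(count);
    for (float &v : values)
        v = dist(gen);
    return values;
}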
Please take a look at pcg-random; it's very simple and, more importantly, it's fast. And it's super easy to modify their C code for Metal. https://www.pcg-random.org/
typedef struct { uint64_t state; uint64_t inc; } pcg32_random_t;

uint32_t pcg32_random_r(thread pcg32_random_t* rng)
{
    uint64_t oldstate = rng->state;
    rng->state = oldstate * 6364136223846793005ULL + rng->inc;
    uint32_t xorshifted = ((oldstate >> 18u) ^ oldstate) >> 27u;
    uint32_t rot = oldstate >> 59u;
    return (xorshifted >> rot) | (xorshifted << ((-rot) & 31));
}

// Defined after pcg32_random_r so the calls below resolve without a forward declaration.
void pcg32_srandom_r(thread pcg32_random_t* rng, uint64_t initstate, uint64_t initseq)
{
    rng->state = 0U;
    rng->inc = (initseq << 1u) | 1u;
    pcg32_random_r(rng);
    rng->state += initstate;
    pcg32_random_r(rng);
}
How do I use it?
float randomF(thread pcg32_random_t* rng)
{
    //return pcg32_random_r(rng)/float(UINT_MAX);
    return ldexp(float(pcg32_random_r(rng)), -32);
}

pcg32_random_t rng;
pcg32_srandom_r(&rng, pos_grid.x*int_time, pos_grid.y*int_time);
auto randomFloat = randomF(&rng);

Issues with Arduino Timing

I am working with an Arduino on a project for which timing is very important. I use TimerOne to trigger timer interrupts and use micros() for delays (delayMicroseconds() was causing problems worse than the one explained below). The program is sending a manual PWM signal to an LED and it is very important that the signal is sent with an error that is less than 8 microseconds (ideally, the signal is sent at the same time in each period). My test code is shown below:
#include <TimerOne.h>
#include <SPI.h>
const int LED_PIN = 3;
const int CHIP_SELECT = 12;
const int PERIOD = 4000;
const double DUTY_CYCLE = .5;
const int HIGH_TIME = PERIOD * DUTY_CYCLE;
const int LOW_TIME = PERIOD - HIGH_TIME;
const int INITIAL_SIGNAL_DELAY = LOW_TIME / 2;
const int HIGH_TIME_TOTAL_DELAY = INITIAL_SIGNAL_DELAY + HIGH_TIME;
const int RESISTOR_VALUE = 255;
boolean triggered = false;
boolean data = false;
unsigned long triggeredTime;
unsigned long s;
unsigned long e;
boolean found;
int i = 0;
void setup()
{
s = micros();
Timer1.initialize(PERIOD);
Timer1.attachInterrupt(trigger);
pinMode(LED_PIN, 3);
pinMode(CHIP_SELECT, OUTPUT);
SPI.begin();
digitalWrite(CHIP_SELECT, LOW);
SPI.transfer(B00010001);
SPI.transfer(RESISTOR_VALUE);
digitalWrite(CHIP_SELECT, HIGH);
e = micros();
Serial.begin(115200);
Serial.print("s: ");
Serial.println(s);
Serial.print("e: ");
Serial.println(e);
}
void loop()
{
if(triggered)
{
while(micros() - triggeredTime < INITIAL_SIGNAL_DELAY)
{ }
s = micros();
digitalWrite(LED_PIN, data);
while(micros() - triggeredTime < HIGH_TIME_TOTAL_DELAY)
{ }
digitalWrite(LED_PIN, LOW);
data = !data;
triggered = false;
e = micros();
//micros();
if(s % 100 > 28 || s % 100 < 12)
{
found = true;
}
if(!found)
{
Serial.print("s: ");
Serial.println(s);
}
else
{
Serial.print("ERROR: ");
Serial.println(s);
}
//Serial.print("e: ");
//Serial.println(e);
}
}
void trigger()
{
triggeredTime = micros();
triggered = true;
}
(it should be noted that the first signal sent is always xx20, usually 5020).
So, with this code, I eventually get an error. I am not sure why, but this error occurs at the same point every single time:
.
.
.
s: 1141020
s: 1145020
s: 1149020
ERROR: 1153032
ERROR: 1157020
ERROR: 1161020
.
.
.
Now, the really weird part is if I remove the comments before micros() (the micros() right after e = micros()), there is no error (or at least there is not an error within the first 30 seconds). I was wondering if anybody could provide an explanation for why this happens. I have dedicated many hours trying to get the timing working properly and everything was working well until I encountered this error. Any help would be very much appreciated. Thank you!

C++ - Function is completely skipped if an internal variable exceeds ~60,000

I wrote the following for a class, but came across some strange behavior while testing it. arrayProcedure is meant to do things with an array based on the 2 "tweaks" at the top of the function (arrSize, and start). For the assignment, arrSize must be 10,000, and start, 100. Just for kicks, I decided to see what happens if I increase them, and for some reason, if arrSize exceeds around 60,000 (I haven't found the exact limit), the program immediately crashes with a stack overflow when using a debugger:
Unhandled exception at 0x008F6977 in TMA3Question1.exe: 0xC00000FD: Stack overflow (parameters: 0x00000000, 0x00A32000).
If I just run it without a debugger, I don't get any helpful errors; Windows hangs for a fraction of a second, then gives me an error: TMA3Question1.exe has stopped working.
I decided to play around with debugging it, but that didn't shed any light. I placed breakpoints above and below the call to arrayProcedure, as well as peppering them inside of it. When arrSize doesn't exceed 60,000 it runs fine: it pauses before calling arrayProcedure, properly waits at all the points inside of it, then pauses on the break underneath the call.
If I raise arrSize however, the break before the call happens, but it appears as though it never even steps into arrayProcedure; it immediately gives me a stack overflow without pausing at any of the internal breakpoints.
The only thing I can think of is that the resulting arrays exceed my computer's available memory, but that doesn't seem likely for a couple of reasons:
It should only use just under a megabyte:
sizeof(double) = 8 bytes
8 * 60000 = 480000 bytes per array
480000 * 2 = 960000 bytes for both arrays
As far as I know, arrays aren't immediately constructed when a function is entered; they're allocated on definition. I placed several breakpoints before the arrays are even declared, and they are never reached.
Any light that you could shed on this would be appreciated.
The code:
#include <iostream>
#include <ctime>
//CLOCKS_PER_SEC is a macro supplied by ctime
double msBetween(clock_t startTime, clock_t endTime) {
return endTime - startTime / (CLOCKS_PER_SEC * 1000.0);
}
void initArr(double arr[], int start, int length, int step) {
for (int i = 0, j = start; i < length; i++, j += step) {
arr[i] = j;
}
}
//The function we're going to inline in the next question
void helper(double a1, double a2) {
std::cout << a1 << " * " << a2 << " = " << a1 * a2 << std::endl;
}
void arrayProcedure() {
const int arrSize = 70000;
const int start = 1000000;
std::cout << "Checking..." << std::endl;
if (arrSize > INT_MAX) {
std::cout << "Given arrSize is too high and exceeds the INT_MAX of: " << INT_MAX << std::endl;
return;
}
double arr1[arrSize];
double arr2[arrSize];
initArr(arr1, start, arrSize, 1);
initArr(arr2, arrSize + start - 1, arrSize, -1);
for (int i = 0; i < arrSize; i++) {
helper(arr1[i], arr2[i]);
}
}
int main(int argc, char* argv[]) {
using namespace std;
const clock_t startTime = clock();
arrayProcedure();
clock_t endTime = clock();
cout << endTime << endl;
double elapsedTime = msBetween(startTime, endTime);
cout << "\n\n" << elapsedTime << " milliseconds. ("
<< elapsedTime / 60000 << " minutes)\n";
}
The default stack size is 1 MB with Visual Studio.
https://msdn.microsoft.com/en-us/library/tdkhxaks.aspx
You can increase the stack size or use the new operator to allocate the arrays on the heap instead.
double *arr1 = new double[arrSize];
double *arr2 = new double[arrSize];
...
delete [] arr1;
delete [] arr2;
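For completeness, here is a sketch of the same idea using std::vector, which keeps the data on the heap and removes the manual delete[] (this is my variation, not part of the original answer; it assumes the initArr and helper functions from the question):
#include <vector>

void arrayProcedure() {
    const int arrSize = 70000;
    const int start = 1000000;
    // Heap-backed storage; freed automatically when the vectors go out of scope.
    std::vector<double> arr1(arrSize);
    std::vector<double> arr2(arrSize);
    initArr(arr1.data(), start, arrSize, 1);
    initArr(arr2.data(), arrSize + start - 1, arrSize, -1);
    for (int i = 0; i < arrSize; i++) {
        helper(arr1[i], arr2[i]);
    }
}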
