ODE implementation of the Maxwell-Stefan equation

I am trying to solve the Maxwell-Stefan equation over a membrane to get the transient mole fraction distribution across the membrane thickness z. But somehow I am not able to code it using ODE45; more precisely, I am not able to write the system in a form that ODE45 can solve. It would be really great if someone could help me with the basic syntax and function setup. The equation I am trying to solve is
$$\frac{dy_i}{dz} \;=\; \frac{1}{c\,D_{i,j}}\left[\,y_i\,(N_i + N_j) - N_i\,\right]$$
where $c$ is the concentration, $D_{i,j}$ is the binary diffusion coefficient, and $N_i$, $N_j$ are the molar fluxes of the two species.
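In case it helps to see the shape of the call, here is a minimal sketch with SciPy's solve_ivp (Python's analogue of MATLAB's ODE45); every numerical value below is an illustrative assumption, not taken from the question:

import numpy as np
from scipy.integrate import solve_ivp

c = 40.0      # total concentration, mol/m^3 (assumed)
D_ij = 1e-9   # binary diffusion coefficient, m^2/s (assumed)
N_i = 1e-3    # molar flux of species i, mol/(m^2 s) (assumed)
N_j = 5e-4    # molar flux of species j, mol/(m^2 s) (assumed)
L = 1e-5      # membrane thickness, m (assumed)

def dy_dz(z, y):
    # dy_i/dz = [y_i*(N_i + N_j) - N_i] / (c * D_ij)
    return (y[0] * (N_i + N_j) - N_i) / (c * D_ij)

sol = solve_ivp(dy_dz, (0.0, L), [0.8], dense_output=True)  # y_i(z=0) = 0.8
z = np.linspace(0.0, L, 50)
y_i = sol.sol(z)[0]  # mole fraction profile across the membrane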

Related

Mathematica's NDSolve reports a stiff system very quickly when trying to solve a large set of differential equations

I'm a physics PhD student solving lubrication equations (non-linear PDEs) related to the evaporation of binary droplets in a cylindrical pixel, in order to simulate the shape evolution. So I split the system into N ODEs, each representing the height evolution (dh/dt) at point i, written as finite differences, and solve them simultaneously using NDSolve. For single-liquid droplets this works perfectly: the droplet evaporates cleanly.
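For readers unfamiliar with that setup, here is a toy method-of-lines sketch (Python/SciPy rather than Mathematica, with a plain 1D diffusion term standing in for the lubrication operator; it only shows the structure, not the actual equations):

import numpy as np
from scipy.integrate import solve_ivp

# Toy stand-in: dh/dt = d^2 h / dr^2, discretised at N grid points with
# central differences and handed to the solver as N coupled ODEs.
N, dr = 100, 0.01

def rhs(t, h):
    dhdt = np.zeros_like(h)
    dhdt[1:-1] = (h[2:] - 2.0 * h[1:-1] + h[:-2]) / dr**2  # interior points
    return dhdt  # endpoints held fixed (Dirichlet boundaries)

r = np.arange(N) * dr
h0 = np.exp(-((r - 0.5) ** 2) / 0.01)                # droplet-like initial bump
sol = solve_ivp(rhs, (0.0, 0.05), h0, method="BDF")  # implicit method, good for stiff systems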
However, for binary droplets I include N additional ODEs for the composition fraction evolution (dX/dt) at each point. This also adds a new term (Marangoni stress) to the dh/dt equations. Immediately NDSolve says:
At t == 0.00029140763302667777`, step size is effectively zero;
singularity or stiff system suspected.
I plot the droplet height at this t-value and it shows a narrow spike at the origin (clearly exploding, hence the stiffness); plotting the composition shows a narrow plummet at the origin (also exploding, just negative; the physics obviously shouldn't permit the composition fraction to drop below 0).
Finally, both sets of equations have terms depending on dX/dr, the composition gradient. If this is set to zero and I also set the evaporation interaction to zero (meaning the two liquids evaporate at the same rate and there can be no X gradient) there should be no change in X anywhere and it should reduce to the single liquid case (dX/dt = 0 and dh/dt no longer depends on X). However, the procedure is introducing some small gradient in X nonetheless, which explodes and causes the same numerical instability.
My question is this: is there anything about NDSolve that might be causing this? I've been over the equations and discretisation a hundred times and am sure it's correct. I've also looked into the documentation of NDSolve, but didn't find anything that helps me. Could it be introducing a small numerical error in the composition gradient?
I can post an MRE below, but it's pretty dense and obviously written in Mathematica (it doesn't transfer to the real world well...), so I don't know how much it'd help. Anyway, thank you for reading this!

Error bound in function approximation algorithm

Suppose we have a set of floating point numbers with an m-bit mantissa and e bits for the exponent. Suppose moreover that we want to approximate a function f.
From the theory we know that usually a "range-reduced function" is used, and then from that function the global function value is derived.
For example, let x = (sx, ex, mx) (sign, exponent and mantissa); then
log2(x) = ex + log2(1.mx), so basically the range-reduced function is "log2(1.mx)".
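As a concrete illustration of the reduction (a Python sketch, where math.log2 merely stands in for whatever approximation of the reduced function is used):

import math

def log2_via_reduction(x):
    m, e = math.frexp(x)             # x = m * 2**e with m in [0.5, 1)
    mantissa, ex = 2.0 * m, e - 1    # renormalize so the mantissa is in [1, 2)
    return ex + math.log2(mantissa)  # log2(x) = ex + log2(1.mx)

print(log2_via_reduction(10.0), math.log2(10.0))  # the two agree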
At present I have implemented reciprocal, square root, log2 and exp2; recently I've started to work on the trigonometric functions. But I was wondering: given a global error bound (ulp error especially), is it possible to derive an error bound for the range-reduced function? Is there some study of this kind of problem? Taking log2(x) as an example, I would like to be able to say...
"OK, I want log2(x) with k ulp error; to achieve this, given our floating point system, we need to approximate log2(1.mx) with p ulp error."
Remember that, as I said, we know we are working with floating point numbers, but the format is generic: it could be the classic F32, but also, for example, e = 10, m = 8, and so on.
I can't actually find any reference that presents this kind of study. The references I have (i.e. the Muller book) don't treat the topic in this way, so I was looking for a paper or something similar. Do you know of any?
I'm also trying to derive such a bound by myself, but it is not easy...
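For what it's worth, a rough first-order propagation argument (a sketch under simplifying assumptions: the reconstruction adds an exactly representable $e_x$, and the rounding of the final addition is ignored) goes like this. An error of $p$ ulp in the reduced function is an absolute error of at most $p \cdot \mathrm{ulp}(\log_2(1.m_x))$, and absolute error passes through the exact addition unchanged, so demanding $k$ ulp in the final result requires

$$p \cdot \mathrm{ulp}\!\left(\log_2(1.m_x)\right) \;\le\; k \cdot \mathrm{ulp}\!\left(e_x + \log_2(1.m_x)\right) \quad\Longrightarrow\quad p \;\le\; k \cdot \frac{\mathrm{ulp}\!\left(e_x + \log_2(1.m_x)\right)}{\mathrm{ulp}\!\left(\log_2(1.m_x)\right)}.$$

The binding case is $e_x = 0$ with $x$ close to 1, where $\log_2(1.m_x)$ approaches 0 and its ulp becomes tiny; that is exactly the region where a naive reduction loses accuracy.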
There is a description of current practice, along with a proposed improvement and an error analysis, at https://hal.inria.fr/ensl-00086904/document. The description of current practice appears consistent with the overview at https://docs.oracle.com/cd/E37069_01/html/E39019/z4000ac119729.html, which is consistent with my memory of the most talked about problem being the mod pi range reduction of trigonometric functions.
I think IEEE floating point was a big step forward just because it standardized things at a time when there was a variety of computer architectures, lowering the risks of porting code between them. But the accuracy requirements it implies may have been overkill: for many problems the constraint on the accuracy of the output is the accuracy of the input data, not the accuracy of the intermediate calculations.

Inequality solving using Prolog

I am working on solving inequality problems using Prolog. I have found code that solves inequalities of the form ax+b >= 0.
The code I have used is as follows.
:- use_module(library(clpr)).

% Note: Prolog variables must start with an uppercase letter, and the goals
% in a clause body are separated by commas, with a single terminating period.
dec_inc(Left, Right) :-
    copy_term(Left-Right, CopyLeft-CopyRight),
    tell_cs(CopyLeft),
    max(CopyRight, Right, Leq),
    tell_cs(Leq).

max([], [], []).
max([E =< _|Ps], [_ =< P1|P1s], [K =< P1|Ls]) :-
    sup(E, K),          % sup/2 from library(clpr): supremum of E under the constraints
    max(Ps, P1s, Ls).

tell_cs([]).
tell_cs([C|Cs]) :-
    { C },              % post the constraint to the CLP(R) store
    tell_cs(Cs).
For example, when I give {2*X+2 >= 5}. it gives the correct answer, {X >= 1.5}.
But if I enter a fraction like {(X+3)/(3*X+1) >= 1}. it gives {1-(3+X)/(1+3.0*X) =< 0.0}.
How can I solve this type of inequality (questions which include fractions) to get the final answer?
Please help me.
If there is any learning material I can refer to, please let me know.
The library(clpr) documentation advises that it deals with non-linear constraints only passively, so you're out of luck. You need a more sophisticated algebra system.
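For example, outside Prolog, a general computer algebra system can solve the rational inequality directly; a quick sketch with Python's SymPy (an alternative tool, not a clpr fix):

from sympy import symbols, solve

x = symbols('x', real=True)
# (x+3)/(3*x+1) >= 1: SymPy case-splits on the sign of the denominator.
print(solve((x + 3) / (3 * x + 1) >= 1, x))  # expected: -1/3 < x <= 1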

Ising 2D Optimization

I have implemented a MC-Simulation of the 2D Ising model in C99.
Compiling with gcc 4.8.2 on Scientific Linux 6.5.
When I scale up the grid the simulation time increases, as expected.
The implementation simply uses the Metropolis–Hastings algorithm.
I tried to find a way to speed up the algorithm, but I don't have any good ideas.
Are there any tricks to do so?
As jimifiki wrote, try to do a profiling session.
In order to improve on the algorithmic side only, you could try the following:
Lookup Table:
When calculating the energy difference for the Metropolis criterion you need to evaluate the exponential exp[-K/T * dE], where K is your scaling constant (in units of Boltzmann's constant) and dE is the energy difference between the original state and the one after a spin-flip.
Calculating exponentials is expensive.
So you simply build a table beforehand in which to look up the possible values. For a nearest-neighbour interaction on a square lattice each spin has four neighbours, so, exploiting the problem's symmetry, dE can only take five values: 8, 4, 0, -4, -8 (in units of the coupling constant). Instead of calling the exp function, use the precalculated table.
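A sketch of that lookup table (Python for illustration; K is folded into the temperature and the coupling is set to 1):

import math, random

T = 2.269  # illustrative temperature near the critical point, in units of J/k_B
# dE can only take the values -8, -4, 0, 4, 8, so five table entries suffice.
boltzmann = {dE: math.exp(-dE / T) for dE in (-8, -4, 0, 4, 8)}

def accept_flip(dE):
    # Metropolis criterion: always accept dE <= 0, otherwise consult the table.
    return dE <= 0 or random.random() < boltzmann[dE]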
Parallelization:
As mentioned before, it is possible to parallelize the algorithm. To preserve the physical correctness, you have to use a so-called checkerboard concept. Consider the two-dimensional grid as a checkerboard and update all the white cells in parallel first, then all the black ones. That should be clear when you consider the nearest-neighbour interaction, which introduces dependencies between neighbouring values.
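The decomposition itself is cheap to express; here is a minimal sketch of the two sublattices (Python/NumPy purely for illustration):

import numpy as np

L = 8
i, j = np.indices((L, L))
white = (i + j) % 2 == 0   # no two white sites are nearest neighbours
black = ~white             # likewise for black sites

# One half-sweep updates all white sites simultaneously, the next all black
# sites; within one colour the Metropolis updates are independent.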
Use GPGPU:
You can also implement the simulation on a GPGPU, e.g. using CUDA, if you're already working in C99.
Some tips:
- Don't forget to align your C99 structs properly.
- Use linear arrays, not nested ones. Aligned memory is normally faster to access, if done properly.
- Try to let the compiler do loop unrolling, etc. (gcc has specific options for this; it is not enabled by default at -O2).
Some more information:
If you are looking for an efficient method to calculate the critical point of the system, the method of choice would be finite-size scaling, where you simulate at different system sizes and different temperatures, then calculate a quantity which is system-size independent at the critical point, and locate the intersection point of the corresponding curves (please see the theory for a detailed explanation).
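One standard choice for such a size-independent quantity (named here as an assumption; the answer above does not specify one) is the Binder cumulant of the magnetisation, which is cheap to compute from the sampled configurations:

import numpy as np

def binder_cumulant(m_samples):
    # U = 1 - <m^4> / (3 <m^2>^2): curves of U versus temperature for
    # different system sizes cross at the critical temperature.
    m = np.asarray(m_samples, dtype=float)
    return 1.0 - np.mean(m ** 4) / (3.0 * np.mean(m ** 2) ** 2)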
I hope I was helpful.
Cheers...
It's normal that your simulation time scales at least with the square of the size, isn't it?
Here are some suggestions:
If you are concerned with thermalization issues, try to use parallel tempering. It can be of help.
The Metropolis-Hastings algorithm can be made parallel. You could try to do it.
Check you are not pessimizing the code.
Are your spin arrays arrays of ints? You could pack many spins into the same int, though it's a lot of work (see the sketch at the end of this answer).
Moreover, remember what Donald Knuth taught us:
premature optimisation is the root of all evil
Before optimising you should first understand where your program is slow. This is called profiling.
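As a small illustration of the spin-packing suggestion above (a sketch of the storage idea only; a real implementation would also need bitwise neighbour logic, which is the hard part):

import numpy as np

# Pack 64 spins (0/1 bits) per uint64 word: a 128 x 128 lattice fits in 128 x 2 words.
spins = np.random.randint(0, 2, size=(128, 128)).astype(np.uint64)
packed = np.zeros((128, 2), dtype=np.uint64)
for col in range(128):
    word, bit = divmod(col, 64)
    packed[:, word] |= spins[:, col] << np.uint64(bit)

def flip_spin(packed, row, col):
    word, bit = divmod(col, 64)
    packed[row, word] ^= np.uint64(1) << np.uint64(bit)  # toggle a single spin bit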

LightsOut game solving method "reduced echelon". Does it always give the correct result?

I am studying the algorithm given here, and somewhere it is claimed that it is efficient and always gives the correct result.
But when I run the algorithm it does not give me a correct or efficient output for the following pattern.
For a 5 x 5 grid, where (n) is the light number and 0/1 states whether the light is on or off (1 = ON, 0 = OFF):
(1)1 (2)0 (3)0 (4)0 (5)0
(6)0 (7)1 (8)0 (9)0 (10)0
(11)0 (12)0 (13)1 (14)0 (15)0
(16)0 (17)0 (18)0 (19)1 (20)0
(21)0 (22)0 (23)0 (24)0 (25)1
The output should be 1, 7, 13, 19, 25 (pressing these lights will turn the full grid OFF), but what I am getting is 3, 5, 6, 7, 8, 10, 13, 16, 18, 19, 20, 21, 23.
While for some patterns it gives me the correct output, as below:
(1)0 (2)0 (3)0 (4)0 (5)1
(6)0 (7)0 (8)0 (9)1 (10)0
(11)0 (12)0 (13)1 (14)0 (15)0
(16)0 (17)1 (18)0 (19)0 (20)0
(21)1 (22)0 (23)0 (24)0 (25)0
Here the output should be 5, 9, 13, 17, 21, and the algorithm gives me the correct result.
If somebody needs the code, let me know and I can post it.
Can somebody please let me know whether this method will always give a correct as well as efficient result?
(I'm the author of the code you linked to.) To the best of my knowledge, the code is correct (and I'm sure that the high-level algorithm of using Gaussian elimination over GF(2) is correct). The solution it produces is guaranteed to solve the puzzle, though it's not necessarily the minimal number of button presses. The "efficiency" I was referring to in the writeup refers to the time complexity of solving the puzzle overall (it can solve a Lights Out grid in polynomial time, as opposed to the exponential-time brute-force solution of trying all possible combinations) rather than to the "efficiency" of the generated solution.
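For reference, here is a minimal sketch of the Gaussian-elimination-over-GF(2) approach described above (illustrative Python, not the linked code):

import numpy as np

def solve_lights_out(state, n=5):
    # Column j of A is the on/off pattern toggled by pressing button j;
    # solving A x = state over GF(2) gives one set of buttons to press.
    N = n * n
    A = np.zeros((N, N), dtype=np.uint8)
    for r in range(n):
        for c in range(n):
            j = r * n + c
            for dr, dc in ((0, 0), (1, 0), (-1, 0), (0, 1), (0, -1)):
                rr, cc = r + dr, c + dc
                if 0 <= rr < n and 0 <= cc < n:
                    A[rr * n + cc, j] = 1
    aug = np.concatenate([A, np.asarray(state, dtype=np.uint8)[:, None]], axis=1)
    row = 0
    for col in range(N):  # Gauss-Jordan elimination; arithmetic mod 2 is XOR
        pivot = next((r for r in range(row, N) if aug[r, col]), None)
        if pivot is None:
            continue  # free column
        aug[[row, pivot]] = aug[[pivot, row]]
        for r in range(N):
            if r != row and aug[r, col]:
                aug[r] ^= aug[row]
        row += 1
    x = np.zeros(N, dtype=np.uint8)
    for r in range(N):  # read off a particular solution (free variables = 0)
        cols = np.flatnonzero(aug[r, :N])
        if cols.size:
            x[cols[0]] = aug[r, N]
        elif aug[r, N]:
            return None  # inconsistent row: the pattern is unsolvable
    return [i + 1 for i in range(N) if x[i]]  # 1-based button numbers

Note also that the 5 x 5 button matrix is singular (its null space is spanned by two "quiet patterns"), so every solvable state has four distinct solutions; a correct solver can therefore legitimately return a different button list from the one you expected.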
I actually don't know any efficient algorithms for finding a solution requiring the minimum number of button presses. Let me know if you find one!
Hope this helps!
