How to solve a set of ODEs in Scilab?

I have the following set of ODEs that I would like to solve numerically in Scilab.
I have successfully written the function that evaluates the right-hand side of the first equation, so I am able to solve the first equation on its own:
function ut = u(t)
ut = [Vm*cos(2*%pi*fs*t); Vm*sin(2*%pi*fs*t)];
endfunction
function dxdt = SystemModel(t, x, u)
A = [(-RS*(LL + LM)^2 - RR*LM^2)/(LL*LM*(LL + LM)), 0.0, RR/(LL*(LL + LM)), pp*wm/LL;
0.0, (-RS*(LL + LM)^2 - RR*LM^2)/(LL*LM*(LL + LM)), -pp*wm/LL, RR/(LL*(LL + LM));
(LM*RR)/(LL + LM), 0.0, -RR/(LL + LM), -pp*wm;
0.0, (LM*RR)/(LL + LM), pp*wm, -RR/(LL + LM)];
B = [(LL + LM)/(LL*LM), 0.0; 0.0, (LL + LM)/(LL*LM); 0.0, 0.0; 0.0, 0.0];
dxdt = A*x + B*u(t);
endfunction
My problem is that I don't know how to write a similar function for evaluating the right-hand side of the second equation, because it depends on the solution of the first equation. Can anybody give me advice on how to do that?
Possible solution:
x0 = zeros(4, 1);
xtilde0 = zeros(4, 1);
X0 = [x0; xtilde0];
t0 = 0;
dt = 0.001;
t = 0:dt:1;
function ut = u(t)
ut = [Vm*cos(2*%pi*fs*t); Vm*sin(2*%pi*fs*t)];
endfunction
function dXdt = RightHandSide(t, X, u)
x = X(1:4);
xtilde = X(5:8);
// dx/dt = A*x + B*u
A = [(-RS*(LL + LM)^2 - RR*LM^2)/(LL*LM*(LL + LM)), 0.0, RR/(LL*(LL + LM)), pp*wm/LL;
0.0, (-RS*(LL + LM)^2 - RR*LM^2)/(LL*LM*(LL + LM)), -pp*wm/LL, RR/(LL*(LL + LM));
(LM*RR)/(LL + LM), 0.0, -RR/(LL + LM), -pp*wm;
0.0, (LM*RR)/(LL + LM), pp*wm, -RR/(LL + LM)];
B = [(LL + LM)/(LL*LM), 0.0; 0.0, (LL + LM)/(LL*LM); 0.0, 0.0; 0.0, 0.0];
// dxtilde/dt = (An - L*Cn)*xtilde + (dA - L*dC)*x + dB*u
An = [(-RSn*(LLn + LMn)^2 - RRn*LMn^2)/(LLn*LMn*(LLn + LMn)), 0.0, RRn/(LLn*(LLn + LMn)), pp*wm/LLn;
0.0, (-RSn*(LLn + LMn)^2 - RRn*LMn^2)/(LLn*LMn*(LLn + LMn)), -pp*wm/LLn, RRn/(LLn*(LLn + LMn));
(LMn*RRn)/(LLn + LMn), 0.0, -RRn/(LLn + LMn), -pp*wm;
0.0, (LMn*RRn)/(LLn + LMn), pp*wm, -RRn/(LLn + LMn)];
K = 1.5;
l1 = (K - 1.0)*((RSn*(LLn + LMn)^2 + RRn*LMn^2)/(LLn*LMn*(LLn + LMn)) + RRn/(LLn + LMn));
l2 = (K - 1.0)*pp*wm;
l3 = (K^2 - 1.0)*((RSn*(LLn + LMn)^2 + RRn*LMn^2)/(LMn*(LLn + LMn)) - (LMn*RRn)/(LLn + LMn)) - (K - 1)*((RSn*(LLn + LMn)^2 + RRn*LMn^2)/(LMn*(LLn + LMn)) + (LLn*RRn)/(LLn + LMn));
l4 = -(K - 1.0)*LLn*wm*pp;
L = [l1, l2;
-l2, l1;
l3, l4;
-l4, l3];
Bn = [(LLn + LMn)/(LLn*LMn), 0.0; 0.0, (LLn + LMn)/(LLn*LMn); 0.0, 0.0; 0.0, 0.0];
Cn = [1.0, 0.0, 0.0, 0.0; 0.0, 1.0, 0.0, 0.0];
A = [(-RS*(LL + LM)^2 - RR*LM^2)/(LL*LM*(LL + LM)), 0.0, RR/(LL*(LL + LM)), pp*wm/LL;
0.0, (-RS*(LL + LM)^2 - RR*LM^2)/(LL*LM*(LL + LM)), -pp*wm/LL, RR/(LL*(LL + LM));
(LM*RR)/(LL + LM), 0.0, -RR/(LL + LM), -pp*wm;
0.0, (LM*RR)/(LL + LM), pp*wm, -RR/(LL + LM)];
B = [(LL + LM)/(LL*LM), 0.0; 0.0, (LL + LM)/(LL*LM); 0.0, 0.0; 0.0, 0.0];
C = [1.0, 0.0, 0.0, 0.0; 0.0, 1.0, 0.0, 0.0];
dA = An - A;
dB = Bn - B;
dC = Cn - C;
dxdt = A*x + B*u(t);
dxtildedt = (An - L*Cn)*xtilde + (dA - L*dC)*x + dB*u(t);
dXdt = [dxdt; dxtildedt];
endfunction
X = ode(X0, t0, t, list(RightHandSide, u));

Let y = x_tilde, and let's assume it is a 3x1 vector (we can't guess its size from your current presentation). Then:
1) Build the column X = [x1 x2 x3 x4 y1 y2 y3].' (big X).
2) Express the column dX/dt in terms of the X coordinates and t.
3) Convert the system built in 2) into a Scilab function X_dot = Xder(t, X).
4) Build the initial state vector Xinit = [x1(t_init); x2(t_init); .. y3(t_init)].
5) Define the vector t of times at which you want the values of X. They all likely have to be ≥ t_init, and must be strictly increasing.
6) Call X = ode(Xinit, t_init, t, Xder).
X(:,i) will then hold the values of the X components at time t(i). You can "back-split" big X into x = X(1:4,:) and x_tilde = X(5:$,:).

Related

How to create a text made of glass in canvas with refraction and reflection?

What I'd like to achieve is close to the example there. You can also just take a look at those screenshots.
The actual result
Notice how the refraction evolves as the page scrolls down and up. While scrolling, there is also a light source moving from right to left.
After scrolling
Ideally I'd like the text to have that transparent, reflective glass aspect as in the example provided, but also to refract what is behind it, which does not seem to be the case here. Indeed, when the canvas is left alone, the refraction still happens, so I suspect the effect is done knowing what would be displayed in the background. As for me, I'd like to refract what's behind dynamically. Then again, I'm thinking it might have been achieved this way for a reason, maybe a performance issue.
All non-canvas elements removed
Indeed, it looks like it is based on the background, but the background is not within the canvas. Also, as you can see in the next picture, the refraction effect is still happening even though the background is removed.
Refraction
The source of light is still there, and I suspect it's using some kind of ray casting/ray tracing method. I'm not at all familiar with drawing in the canvas (except using p5.js for simple things), and it took me a long time to even find ray tracing, having no idea what I was looking for.
.... Questions ....
How do I get the transparent, reflective glass aspect on the text? Should it be achieved with graphic design tools? (I don't know how to get an object (see screenshot below) that seems to have the texture bound afterwards. I'm not even sure I'm using the right vocabulary, but assuming I am, I don't know how to make such a texture.)
text object no "texture"
How do I refract everything that would be placed behind the glass object? (Earlier I came to the conclusion that I needed to use canvas, not just because I found this example, but also because of other considerations related to the project I'm working on. I've invested a lot of time learning enough SVG to achieve what you can see in the next screenshot, and failed to achieve what I was aiming for. I'm not willing to do the same with ray casting, hence my third question. I hope that's understandable... Still, the refracted part is there, but it looks a lot less realistic than in the provided example.)
SVG
Is ray casting/ray tracing the right path to dig into for achieving the refraction? Will it be okay to use if it is ray tracing every object behind?
Thanks for your time and concern.
Reflection and Refraction
There are so many tutorials online for achieving this FX that I cannot see the point in repeating them.
This answer presents an approximation using a normal map in place of a 3D model, and flat texture maps to represent the reflection and refraction maps, rather than the 3D textures traditionally used to get reflections and refraction.
Generating a normal map.
The snippet below generates a normal map from input text with various options. The process is reasonably quick (though not real time) and will be the stand-in for a 3D model in the WebGL rendering solution.
It first creates a height map of the text, adds some smoothing, then converts the map to a normal map.
text.addEventListener("keyup", createNormalMap)
createNormalMap();
function createNormalMap(){
text.focus();
setTimeout(() => {
const can = normalMapText(text.value, "Arial Black", 96, 8, 2, 0.1, true, "round");
result.innerHTML = "";
result.appendChild(can);
}, 0);
}
function normalMapText(text, font, size, bevel, smooth = 0, curve = 0.5, smoothNormals = true, corners = "round") {
const canvas = document.createElement("canvas");
const mask = document.createElement("canvas");
const ctx = canvas.getContext("2d");
const ctxMask = mask.getContext("2d");
ctx.font = size + "px " + font;
const tw = ctx.measureText(text).width;
const cx = (mask.width = canvas.width = tw + bevel * 3) / 2;
const cy = (mask.height = canvas.height = size + bevel * 3) / 2;
ctx.font = size + "px " + font;
ctx.textAlign = "center";
ctx.textBaseline = "middle";
ctx.lineJoin = corners;
const step = 255 / (bevel + 1);
var j, i = 0, val = step;
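// Build the height map: stroke the text with progressively thinner, brighter outlines to form a bevel ramp, then fill the core with white.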
while (i < bevel) {
ctx.lineWidth = bevel - i;
const v = ((val / 255) ** curve) * 255;
ctx.strokeStyle = `rgb(${v},${v},${v})`;
ctx.strokeText(text, cx, cy);
i++;
val += step;
}
ctx.fillStyle = "#FFF";
ctx.fillText(text, cx, cy);
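// Optional smoothing: blur the height map, then use the un-blurred copy (kept in mask) with destination-in so the blur does not bleed past the original text silhouette.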
if (smooth >= 1) {
ctxMask.drawImage(canvas, 0, 0);
ctx.filter = "blur(" + smooth + "px)";
ctx.drawImage(mask, 0, 0);
ctx.globalCompositeOperation = "destination-in";
ctx.filter = "none";
ctx.drawImage(mask, 0, 0);
ctx.globalCompositeOperation = "source-over";
}
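// Read the pixels back and keep just the red channel as a one-byte-per-pixel height buffer.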
const w = canvas.width, h = canvas.height, w4 = w << 2;
const imgData = ctx.getImageData(0,0,w,h);
const d = imgData.data;
const heightBuf = new Uint8Array(w * h);
j = i = 0;
while (i < d.length) {
heightBuf[j++] = d[i]
i += 4;
}
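// For every pixel with alpha, estimate a surface normal from the height differences to the left, upper, and upper-left neighbours (two cross products, normalized and summed).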
var x, y, xx, yy, zz, xx1, yy1, zz1, xx2, yy2, zz2, dist;
i = 0;
for(y = 0; y < h; y ++){
for(x = 0; x < w; x ++){
if(d[i + 3]) { // only pixels with alpha > 0
const idx = x + y * w;
const x1 = 1;
const z1 = heightBuf[idx - 1] === undefined ? 0 : heightBuf[idx - 1] - heightBuf[idx];
const y1 = 0;
const x2 = 0;
const z2 = heightBuf[idx - w] === undefined ? 0 : heightBuf[idx - w] - heightBuf[idx];
const y2 = -1;
const x3 = 1;
const z3 = heightBuf[idx - w - 1] === undefined ? 0 : heightBuf[idx - w - 1] - heightBuf[idx];
const y3 = -1;
xx = y3 * z2 - z3 * y2
yy = z3 * x2 - x3 * z2
zz = x3 * y2 - y3 * x2
dist = (xx * xx + yy * yy + zz * zz) ** 0.5;
xx /= dist;
yy /= dist;
zz /= dist;
xx1 = y1 * z3 - z1 * y3
yy1 = z1 * x3 - x1 * z3
zz1 = x1 * y3 - y1 * x3
dist = (xx1 * xx1 + yy1 * yy1 + zz1 * zz1) ** 0.5;
xx += xx1 / dist;
yy += yy1 / dist;
zz += zz1 / dist;
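// Optionally repeat the estimate with a 2-pixel neighbourhood and blend it in at half weight for smoother normals.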
if (smoothNormals) {
const x1 = 2;
const z1 = heightBuf[idx - 2] === undefined ? 0 : heightBuf[idx - 2] - heightBuf[idx];
const y1 = 0;
const x2 = 0;
const z2 = heightBuf[idx - w * 2] === undefined ? 0 : heightBuf[idx - w * 2] - heightBuf[idx];
const y2 = -2;
const x3 = 2;
const z3 = heightBuf[idx - w * 2 - 2] === undefined ? 0 : heightBuf[idx - w * 2 - 2] - heightBuf[idx];
const y3 = -2;
xx2 = y3 * z2 - z3 * y2
yy2 = z3 * x2 - x3 * z2
zz2 = x3 * y2 - y3 * x2
dist = (xx2 * xx2 + yy2 * yy2 + zz2 * zz2) ** 0.5 * 2;
xx2 /= dist;
yy2 /= dist;
zz2 /= dist;
xx1 = y1 * z3 - z1 * y3
yy1 = z1 * x3 - x1 * z3
zz1 = x1 * y3 - y1 * x3
dist = (xx1 * xx1 + yy1 * yy1 + zz1 * zz1) ** 0.5 * 2;
xx2 += xx1 / dist;
yy2 += yy1 / dist;
zz2 += zz1 / dist;
xx += xx2;
yy += yy2;
zz += zz2;
}
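// Normalize the accumulated normal and pack it into RGB; flat areas come out as the standard normal-map blue (128,128,255).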
dist = (xx * xx + yy * yy + zz * zz) ** 0.5;
d[i+0] = ((xx / dist) + 1.0) * 128;
d[i+1] = ((yy / dist) + 1.0) * 128;
d[i+2] = 255 - ((zz / dist) + 1.0) * 128;
}
i += 4;
}
}
ctx.putImageData(imgData, 0, 0);
return canvas;
}
<input id="text" type="text" value="Normal Map" />
<div id="result"></div>
Approximation
To render the text we need to create some shaders. As we are using a normal map the vertex shader can be very simple.
Vertex shader
We are using a quad to render the whole canvas. The vertex shader outputs the 4 corners and converts each corner to a texture coordinate.
#version 300 es
in vec2 vert;
out vec2 texCoord;
void main() {
texCoord = vert * 0.5 + 0.5;
gl_Position = vec4(vert, 1, 1);
}
Fragment shader
The fragment shader has 3 texture inputs: the normal map, the refraction map, and the reflection map.
The fragment shader first works out whether the pixel is part of the background or on the text. If it is on the text, it converts the RGB texture normal into a vector normal.
It then uses vector addition to offset the texture coordinates and sample the reflected and refracted textures, mixing them by the normal map's z value. In effect, refraction is strongest when the normal faces up and reflection is strongest when the normal faces away.
#version 300 es
uniform sampler2D normalMap;
uniform sampler2D refractionMap;
uniform sampler2D reflectionMap;
in vec2 texCoord;
out vec4 pixel;
void main() {
vec4 norm = texture(normalMap, texCoord);
if (norm.a > 0) {
vec3 normal = normalize(norm.rgb - 0.5);
vec2 tx1 = texCoord + normal.xy * 0.1;
vec2 tx2 = texCoord - normal.xy * 0.2;
pixel = vec4(mix(texture(reflectionMap, tx2).rgb, texture(refractionMap, tx1).rgb, abs(normal.z)), norm.a);
} else {
pixel = texture(refractionMap, texCoord);
}
}
That is the most basic form that will give the impression of reflection and refraction.
Example NOT REAL reflection refraction.
The example is a little more complex as the various textures have different sizes and thus need to be scaled in the fragment shader to be the correct size.
I have also added some tinting to both the refraction and reflections and mixed the reflection via a curve.
The background is scrolled to the mouse position. To match a background on the page you would move the canvas over the background.
There are a few #defines in the frag shader to control the settings. You could make them uniforms, or constants.
mixCurve controls the mix of the reflect/refract textures. Values between 0 and 1 ease out the refraction; values greater than 1 ease out the reflection.
The normal map is one-to-one with rendered pixels. As 2D canvas rendering is rather poor quality, you can get a better result by oversampling the normal map in the fragment shader.
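A minimal sketch of such oversampling (not used in the example below; it assumes the same normalMap and texCoord names and simply averages four samples taken half a texel apart):
vec2 px = 0.5 / vec2(textureSize(normalMap, 0));   // half-texel step
vec4 norm = ( texture(normalMap, texCoord + vec2(-px.x, -px.y))
            + texture(normalMap, texCoord + vec2( px.x, -px.y))
            + texture(normalMap, texCoord + vec2(-px.x,  px.y))
            + texture(normalMap, texCoord + vec2( px.x,  px.y)) ) * 0.25;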
const vertSrc = `#version 300 es
in vec2 verts;
out vec2 texCoord;
void main() {
texCoord = verts * vec2(0.5, -0.5) + 0.5;
gl_Position = vec4(verts, 1, 1);
}
`
const fragSrc = `#version 300 es
precision highp float;
#define refractStrength 0.1
#define reflectStrength 0.2
#define refractTint vec3(1,0.95,0.85)
#define reflectTint vec3(1,1.25,1.42)
#define mixCurve 0.3
uniform sampler2D normalMap;
uniform sampler2D refractionMap;
uniform sampler2D reflectionMap;
uniform vec2 scrolls;
in vec2 texCoord;
out vec4 pixel;
void main() {
vec2 nSize = vec2(textureSize(normalMap, 0));
vec2 scaleCoords = nSize / vec2(textureSize(refractionMap, 0));
vec2 rCoord = (texCoord - scrolls) * scaleCoords;
vec4 norm = texture(normalMap, texCoord);
if (norm.a > 0.99) {
vec3 normal = normalize(norm.rgb - 0.5);
vec2 tx1 = rCoord + normal.xy * scaleCoords * refractStrength;
vec2 tx2 = rCoord - normal.xy * scaleCoords * reflectStrength;
vec3 c1 = texture(refractionMap, tx1).rgb * refractTint;
vec3 c2 = texture(reflectionMap, tx2).rgb * reflectTint;
pixel = vec4(mix(c2, c1, abs(pow(normal.z,mixCurve))), 1.0);
} else {
pixel = texture(refractionMap, rCoord);
}
}
`
var program, loc;
function normalMapText(text, font, size, bevel, smooth = 0, curve = 0.5, smoothNormals = true, corners = "round") {
const canvas = document.createElement("canvas");
const mask = document.createElement("canvas");
const ctx = canvas.getContext("2d");
const ctxMask = mask.getContext("2d");
ctx.font = size + "px " + font;
const tw = ctx.measureText(text).width;
const cx = (mask.width = canvas.width = tw + bevel * 3) / 2;
const cy = (mask.height = canvas.height = size + bevel * 3) / 2;
ctx.font = size + "px " + font;
ctx.textAlign = "center";
ctx.textBaseline = "middle";
ctx.lineJoin = corners;
const step = 255 / (bevel + 1);
var j, i = 0, val = step;
while (i < bevel) {
ctx.lineWidth = bevel - i;
const v = ((val / 255) ** curve) * 255;
ctx.strokeStyle = `rgb(${v},${v},${v})`;
ctx.strokeText(text, cx, cy);
i++;
val += step;
}
ctx.fillStyle = "#FFF";
ctx.fillText(text, cx, cy);
if (smooth >= 1) {
ctxMask.drawImage(canvas, 0, 0);
ctx.filter = "blur(" + smooth + "px)";
ctx.drawImage(mask, 0, 0);
ctx.globalCompositeOperation = "destination-in";
ctx.filter = "none";
ctx.drawImage(mask, 0, 0);
ctx.globalCompositeOperation = "source-over";
}
const w = canvas.width, h = canvas.height, w4 = w << 2;
const imgData = ctx.getImageData(0,0,w,h);
const d = imgData.data;
const heightBuf = new Uint8Array(w * h);
j = i = 0;
while (i < d.length) {
heightBuf[j++] = d[i]
i += 4;
}
var x, y, xx, yy, zz, xx1, yy1, zz1, xx2, yy2, zz2, dist;
i = 0;
for(y = 0; y < h; y ++){
for(x = 0; x < w; x ++){
if(d[i + 3]) { // only pixels with alpha > 0
const idx = x + y * w;
const x1 = 1;
const z1 = heightBuf[idx - 1] === undefined ? 0 : heightBuf[idx - 1] - heightBuf[idx];
const y1 = 0;
const x2 = 0;
const z2 = heightBuf[idx - w] === undefined ? 0 : heightBuf[idx - w] - heightBuf[idx];
const y2 = -1;
const x3 = 1;
const z3 = heightBuf[idx - w - 1] === undefined ? 0 : heightBuf[idx - w - 1] - heightBuf[idx];
const y3 = -1;
xx = y3 * z2 - z3 * y2
yy = z3 * x2 - x3 * z2
zz = x3 * y2 - y3 * x2
dist = (xx * xx + yy * yy + zz * zz) ** 0.5;
xx /= dist;
yy /= dist;
zz /= dist;
xx1 = y1 * z3 - z1 * y3
yy1 = z1 * x3 - x1 * z3
zz1 = x1 * y3 - y1 * x3
dist = (xx1 * xx1 + yy1 * yy1 + zz1 * zz1) ** 0.5;
xx += xx1 / dist;
yy += yy1 / dist;
zz += zz1 / dist;
if (smoothNormals) {
const x1 = 2;
const z1 = heightBuf[idx - 2] === undefined ? 0 : heightBuf[idx - 2] - heightBuf[idx];
const y1 = 0;
const x2 = 0;
const z2 = heightBuf[idx - w * 2] === undefined ? 0 : heightBuf[idx - w * 2] - heightBuf[idx];
const y2 = -2;
const x3 = 2;
const z3 = heightBuf[idx - w * 2 - 2] === undefined ? 0 : heightBuf[idx - w * 2 - 2] - heightBuf[idx];
const y3 = -2;
xx2 = y3 * z2 - z3 * y2
yy2 = z3 * x2 - x3 * z2
zz2 = x3 * y2 - y3 * x2
dist = (xx2 * xx2 + yy2 * yy2 + zz2 * zz2) ** 0.5 * 2;
xx2 /= dist;
yy2 /= dist;
zz2 /= dist;
xx1 = y1 * z3 - z1 * y3
yy1 = z1 * x3 - x1 * z3
zz1 = x1 * y3 - y1 * x3
dist = (xx1 * xx1 + yy1 * yy1 + zz1 * zz1) ** 0.5 * 2;
xx2 += xx1 / dist;
yy2 += yy1 / dist;
zz2 += zz1 / dist;
xx += xx2;
yy += yy2;
zz += zz2;
}
dist = (xx * xx + yy * yy + zz * zz) ** 0.5;
d[i+0] = ((xx / dist) + 1.0) * 128;
d[i+1] = ((yy / dist) + 1.0) * 128;
d[i+2] = 255 - ((zz / dist) + 1.0) * 128;
}
i += 4;
}
}
ctx.putImageData(imgData, 0, 0);
return canvas;
}
function createChecker(size, width, height) {
const canvas = document.createElement("canvas");
const ctx = canvas.getContext("2d");
canvas.width = width * size;
canvas.height = height * size;
for(var y = 0; y < size; y ++) {
for(var x = 0; x < size; x ++) {
const xx = x * width;
const yy = y * height;
ctx.fillStyle ="#888";
ctx.fillRect(xx,yy,width,height);
ctx.fillStyle ="#DDD";
ctx.fillRect(xx,yy,width/2,height/2);
ctx.fillRect(xx+width/2,yy+height/2,width/2,height/2);
}
}
return canvas;
}
const mouse = {x:0, y:0};
addEventListener("mousemove",e => {mouse.x = e.pageX; mouse.y = e.pageY });
var normMap = normalMapText("GLASSY", "Arial Black", 128, 24, 1, 0.1, true, "round");
canvas.width = normMap.width;
canvas.height = normMap.height;
const locations = {updates: []};
const fArr = arr => new Float32Array(arr);
const gl = canvas.getContext("webgl2", {premultipliedAlpha: false, antialias: false, alpha: false});
const textures = {};
setup();
function texture(gl, image, {min = "LINEAR", mag = "LINEAR", wrapX = "REPEAT", wrapY = "REPEAT"} = {}) {
const texture = gl.createTexture();
const target = gl.TEXTURE_2D;
gl.bindTexture(target, texture);
gl.texParameteri(target, gl.TEXTURE_MIN_FILTER, gl[min]);
gl.texParameteri(target, gl.TEXTURE_MAG_FILTER, gl[mag]);
gl.texParameteri(target, gl.TEXTURE_WRAP_S, gl[wrapX]);
gl.texParameteri(target, gl.TEXTURE_WRAP_T, gl[wrapY]);
gl.texImage2D(target, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, image);
return texture;
}
function bindTexture(texture, unit) {
gl.activeTexture(gl.TEXTURE0 + unit);
gl.bindTexture(gl.TEXTURE_2D, texture);
}
function Location(name, data, type = "fv", autoUpdate = true) {
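// Caches the uniform location, its data array, and a pre-bound glUniform<n><type> call; entries created with autoUpdate are re-uploaded every frame via locations.updates.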
const glUpdateCall = gl["uniform" + data.length + type].bind(gl);
const loc = gl.getUniformLocation(program, name);
locations[name] = {data, update() {glUpdateCall(loc, data)}};
autoUpdate && locations.updates.push(locations[name]);
return locations[name];
}
function compileShader(src, type, shader = gl.createShader(type)) {
gl.shaderSource(shader, src);
gl.compileShader(shader);
return shader;
}
function setup() {
program = gl.createProgram();
gl.attachShader(program, compileShader(vertSrc, gl.VERTEX_SHADER));
gl.attachShader(program, compileShader(fragSrc, gl.FRAGMENT_SHADER));
gl.linkProgram(program);
gl.bindBuffer(gl.ELEMENT_ARRAY_BUFFER, gl.createBuffer());
gl.bufferData(gl.ELEMENT_ARRAY_BUFFER, new Uint8Array([0,1,2,0,2,3]), gl.STATIC_DRAW);
gl.bindBuffer(gl.ARRAY_BUFFER, gl.createBuffer());
gl.bufferData(gl.ARRAY_BUFFER, fArr([-1,-1,1,-1,1,1,-1,1]), gl.STATIC_DRAW);
gl.enableVertexAttribArray(loc = gl.getAttribLocation(program, "verts"));
gl.vertexAttribPointer(loc, 2, gl.FLOAT, false, 0, 0);
gl.useProgram(program);
Location("scrolls", [0, 0]);
Location("normalMap", [0], "i", false).update();
Location("refractionMap", [1], "i", false).update();
Location("reflectionMap", [2], "i", false).update();
textures.norm = texture(gl,normMap);
textures.reflect = texture(gl,createChecker(8,128,128));
textures.refract = texture(gl,createChecker(8,128,128));
gl.viewport(0, 0, normMap.width, normMap.height);
bindTexture(textures.norm, 0);
bindTexture(textures.reflect, 1);
bindTexture(textures.refract, 2);
loop();
}
function draw() {
for(const l of locations.updates) { l.update() }
gl.drawElements(gl.TRIANGLES, 6, gl.UNSIGNED_BYTE, 0);
}
function loop() {
locations.scrolls.data[0] = -1 + mouse.x / canvas.width;
locations.scrolls.data[1] = -1 + mouse.y / canvas.height;
draw();
requestAnimationFrame(loop);
}
canvas {
position: absolute;
top: 0px;
left: 0px;
}
<canvas id="canvas"></canvas>
Personally I find this FX more visually pleasing than simulations based on real lighting models. Though keep in mind THIS IS NOT Refraction or Reflections.

Why am I seeing these blending artifacts in my GLSL shader?

I'm attempting to create a shader that additively blends colored "blobs" (kind of like particles) on top of one another. This seems like it should be a straightforward task but I'm getting strange "banding"-like artifacts when the blobs blend.
First off, here's the behavior I'm after (replicated using Photoshop layers):
Note that the three color layers are all set to blendmode "Linear Dodge (Add)" which as far as I understand is Photoshop's "additive" blend mode.
If I merge the color layers and leave the resulting layer set to "Normal" blending, I'm then free to change the background color as I please.
Obviously additive blending will not work on top of a non-black background, so in the end I will also want/need the shader to support this pre-merging of colors before finally blending into a background that could have any color. However, I'm content for now to only focus on getting the additive-on-top-of-black blending working correctly, because it's not.
Here's my shader code in its current state.
const int MAX_SHAPES = 10;
vec2 spread = vec2(0.3, 0.3);
vec2 offset = vec2(0.0, 0.0);
float shapeSize = 0.3;
const float s = 1.0;
float shapeColors[MAX_SHAPES * 3] = float[MAX_SHAPES * 3] (
s, 0.0, 0.0,
0.0, s, 0.0,
0.0, 0.0, s,
s, 0.0, 0.0,
s, 0.0, 0.0,
s, 0.0, 0.0,
s, 0.0, 0.0,
s, 0.0, 0.0,
s, 0.0, 0.0,
s, 0.0, 0.0
);
vec2 motionFunction (float i) {
float t = iTime;
return vec2(
(cos(t * 0.31 + i * 3.0) + cos(t * 0.11 + i * 14.0) + cos(t * 0.78 + i * 30.0) + cos(t * 0.55 + i * 10.0)) / 4.0,
(cos(t * 0.13 + i * 33.0) + cos(t * 0.66 + i * 38.0) + cos(t * 0.42 + i * 83.0) + cos(t * 0.9 + i * 29.0)) / 4.0
);
}
float blend (float src, float dst, float alpha) {
return alpha * src + (1.0 - alpha) * dst;
}
void mainImage (out vec4 fragColor, in vec2 fragCoord) {
float aspect = iResolution.x / iResolution.y;
float x = (fragCoord.x / iResolution.x) - 0.5;
float y = (fragCoord.y / iResolution.y) - 0.5;
vec2 pixel = vec2(x, y / aspect);
vec4 totalColor = vec4(0.0, 0.0, 0.0, 0.0);
for (int i = 0; i < MAX_SHAPES; i++) {
if (i >= 3) {
break;
}
vec2 shapeCenter = motionFunction(float(i));
shapeCenter *= spread;
shapeCenter += offset;
float dx = shapeCenter.x - pixel.x;
float dy = shapeCenter.y - pixel.y;
float d = sqrt(dx * dx + dy * dy);
float ratio = d / shapeSize;
float intensity = 1.0 - clamp(ratio, 0.0, 1.0);
totalColor.x = totalColor.x + shapeColors[i * 3 + 0] * intensity;
totalColor.y = totalColor.y + shapeColors[i * 3 + 1] * intensity;
totalColor.z = totalColor.z + shapeColors[i * 3 + 2] * intensity;
totalColor.w = totalColor.w + intensity;
}
float alpha = clamp(totalColor.w, 0.0, 1.0);
float background = 0.0;
fragColor = vec4(
blend(totalColor.x, background, alpha),
blend(totalColor.y, background, alpha),
blend(totalColor.z, background, alpha),
1.0
);
}
And here's a ShaderToy version where you can view it live — https://www.shadertoy.com/view/wlf3RM
Or as a video — https://streamable.com/un25t
The visual artifacts should be pretty obvious, but here's a video that points them out: https://streamable.com/kxaps
(I think they are way more prevalent in the video linked before this one, though. The motion really makes them pop out.)
Also as a static image for comparison:
Basically, there are "edges" that appear at certain magical thresholds. I have no idea how they got there or how to get rid of them. Your help would be highly appreciated.
The inside lines are where totalColor.w reaches 1 and so alpha is clamped to 1 inside them. The outside ones that you've traced in white are the edges of the circles.
I modified your ShaderToy link by changing float alpha = clamp(totalColor.w, 0.0, 1.0); to float alpha = 1.0; and float intensity = 1.0 - clamp(ratio, 0.0, 1.0); to float intensity = smoothstep(1.0, 0.0, ratio); (to smooth out the edges of the circles) and now it looks like the first picture.
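In other words, the two edited lines are (a sketch; everything else in mainImage stays as in the question):
float intensity = smoothstep(1.0, 0.0, ratio);   // inside the loop: soft-edged circles instead of a hard clamp
float alpha = 1.0;                               // after the loop: don't re-scale the additive sum by clamped coverage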

Parallel Computing with Julia @parallel and SharedArray

I have been trying to implement some parallel programming in Julia using @parallel and SharedArrays.
Xi = Array{Float64}([0.0, 450.0, 450.0, 0.0, 0.0, 450.0, 450.0, 0.0])
Yi = Array{Float64}([0.0, 0.0, 600.0, 600.0, 0.0, 0.0, 600.0, 600.0])
Zi = Array{Float64}([0.0, 0.0, 0.0, 0.0, 400.0, 400.0, 400.0, 400.0])
Xj = Array{Float64}([0.0, 450.0, 450.0, 0.0, 0.0, 450.0, 450.0, 0.0])
Yj = Array{Float64}([0.0, 0.0, 600.0, 600.0, 0.0, 0.0, 600.0, 600.0])
Zj = Array{Float64}([0.0, 0.0, 0.0, 0.0, 400.0, 400.0, 400.0, 400.0])
L = Array{Float64}([400.0, 400.0, 400.0, 400.0, 450.0, 600.0, 450.0, 600.0])
Rot = Array{Float64}([90.0, 90.0, 90.0, 90.0, 0.0, 0.0, 0.0, 0.0])
Obviously these vectors will be huge, but for simplicity I have kept them at this limited size.
This is the operation without parallel computing:
function jt_transcoord(Xi, Yi, Zi, Xj, Yj, Zj, Rot, L)
r = Vector(length(Xi))
for i in 1:length(Xi)
rxX = (Xj[i] - Xi[i]) / L[i]
rxY = (Yj[i] - Yi[i]) / L[i]
rxZ = (Zj[i] - Zi[i]) / L[i]
if rxX == 0 && rxY == 0
r[i] = [0 0 rxZ; cosd(Rot[i]) -rxZ*sind(Rot[i]) 0; sind(Rot[i]) rxZ*cosd(Rot[i]) 0]
else
R=sqrt(rxX^2+rxY^2)
r21=(-rxX*rxZ*cosd(Rot[i])+rxY*sind(Rot[i]))/R
r22=(-rxY*rxZ*cosd(Rot[i])-rxX*sind(Rot[i]))/R
r23=R*cosd(Rot[i])
r31=(rxX*rxZ*sind(Rot[i])+rxY*cosd(Rot[i]))/R
r32=(rxY*rxZ*sind(Rot[i])-rxX*cosd(Rot[i]))/R
r33=-R*sind(Rot[i])
r[i] = [rxX rxY rxZ;r21 r22 r23;r31 r32 r33]
end
end
return r
end
The returned value is basically an array that contains a matrix in each vector row. That looks something like this:
r =
[[0.0 0.0 1.0; 0.0 -1.0 0.0; 1.0 0.0 0.0],
[0.0 0.0 1.0; 0.0 -1.0 0.0; 1.0 0.0 0.0],
[0.0 0.0 1.0; 0.0 -1.0 0.0; 1.0 0.0 0.0],
[0.0 0.0 1.0; 0.0 -1.0 0.0; 1.0 0.0 0.0],
[1.0 0.0 0.0; 0.0 -0.0 1.0; 0.0 -1.0 -0.0],
[0.0 1.0 0.0; 0.0 -0.0 1.0; 1.0 0.0 -0.0],
[-1.0 0.0 0.0; 0.0 0.0 1.0; 0.0 1.0 -0.0],
[0.0 -1.0 0.0; -0.0 0.0 1.0; -1.0 -0.0 -0.0]]
This is my function using @parallel. First of all I need to convert the vectors to SharedArrays:
Xi = convert(SharedArray, Xi)
Yi = convert(SharedArray, Yi)
Zi = convert(SharedArray, Zi)
Xj = convert(SharedArray, Xj)
Yj = convert(SharedArray, Yj)
Zj = convert(SharedArray, Zj)
L = convert(SharedArray, L)
Rot = convert(SharedArray, Rot)
This is the same code but using @parallel:
function jt_transcoord_parallel(Xi, Yi, Zi, Xj, Yj, Zj, Rot, L)
r = SharedArray{Float64}(zeros((length(Xi),1)))
@parallel for i in 1:length(Xi)
rxX = (Xj[i] - Xi[i]) / L[i]
rxY = (Yj[i] - Yi[i]) / L[i]
rxZ = (Zj[i] - Zi[i]) / L[i]
if rxX == 0 && rxY == 0
r[i] = [0 0 rxZ; cosd(Rot[i]) -rxZ*sind(Rot[i]) 0; sind(Rot[i]) rxZ*cosd(Rot[i]) 0]
else
R=sqrt(rxX^2+rxY^2)
r21=(-rxX*rxZ*cosd(Rot[i])+rxY*sind(Rot[i]))/R
r22=(-rxY*rxZ*cosd(Rot[i])-rxX*sind(Rot[i]))/R
r23=R*cosd(Rot[i])
r31=(rxX*rxZ*sind(Rot[i])+rxY*cosd(Rot[i]))/R
r32=(rxY*rxZ*sind(Rot[i])-rxX*cosd(Rot[i]))/R
r33=-R*sind(Rot[i])
r[i] = [rxX rxY rxZ;r21 r22 r23;r31 r32 r33]
end
end
return r
end
I just got a vector of zeros. My question is: Is there a way to implement this function using @parallel in Julia and get the same results that I got with my original function?
The functions jt_transcoord and jt_transcoord_parallel have major coding flaws.
In jt_transcoord, you are assigning an array to a vector element position. For example, you write r = Vector(length(Xi)) and then assign r[i] = [rxX rxY rxZ;r21 r22 r23;r31 r32 r33]. But r[i] should be a number, and you instead assign it a 3x3 matrix. I suspect that Julia is quietly changing types for you.
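You can see this quickly in a Julia v0.6 REPL (a small sketch; the exact output formatting may differ):
r = Vector(length(Xi))        # an Array{Any,1} with undefined entries
r[1] = [1.0 2.0; 3.0 4.0]     # accepted, because an Any slot can hold a Matrix
typeof(r[1])                  # Array{Float64,2}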
SharedArray objects will not admit this lax type conversion behavior. The components of a SharedArray must be of a single primitive type such as Float64, and Vector{Matrix} is not a primitive type. Open a Julia v0.6 REPL and copy/paste the following code:
r = SharedArray{Float64}(length(Xi))
for i in 1:length(Xi)
rxX = (Xj[i] - Xi[i]) / L[i]
rxY = (Yj[i] - Yi[i]) / L[i]
rxZ = (Zj[i] - Zi[i]) / L[i]
if rxX == 0 && rxY == 0
r[i] = [0 0 rxZ; cosd(Rot[i]) -rxZ*sind(Rot[i]) 0; sind(Rot[i]) rxZ*cosd(Rot[i]) 0]
else
R = sqrt(rxX^2+rxY^2)
r21 = (-rxX*rxZ*cosd(Rot[i])+rxY*sind(Rot[i]))/R
r22 = (-rxY*rxZ*cosd(Rot[i])-rxX*sind(Rot[i]))/R
r23 = R*cosd(Rot[i])
r31 = (rxX*rxZ*sind(Rot[i])+rxY*cosd(Rot[i]))/R
r32 = (rxY*rxZ*sind(Rot[i])-rxX*cosd(Rot[i]))/R
r33 = -R*sind(Rot[i])
r[i] = [rxX rxY rxZ;r21 r22 r23;r31 r32 r33]
end
end
On my end, I get:
ERROR: MethodError: Cannot `convert` an object of type Array{Float64,2} to an object of type Float64
This may have arisen from a call to the constructor Float64(...),
since type constructors fall back to convert methods.
Stacktrace:
[1] setindex!(::SharedArray{Float64,2}, ::Array{Float64,2}, ::Int64) at ./sharedarray.jl:483
[2] macro expansion at ./REPL[26]:6 [inlined]
[3] anonymous at ./<missing>:?
Essentially, Julia is telling you that it cannot assign a matrix to a SharedArray vector.
What are your options?
If you insist on having a Vector{Matrix} return type, then use r = Vector{Matrix{Float64}}(length(Xi)) in jt_transcoord. But you cannot use SharedArrays for this since Vector{Matrix} is not an admissible primitive type.
Alternatively, if you are willing to operate with tensors (i.e. 3-way arrays) then you can use pseudocode A below. But SharedArray computing will only help you if you carefully account for which process owns which portion of the tensor. Otherwise, the processes will need to communicate with each other, and your parallelized function could execute very slowly.
If you are willing to lay your 3x3 matrices in a 3n x 3 columnwise fashion, then you can use pseudocode B below.
Pseudocode A
function jt_transcoord_tensor(Xi, Yi, Zi, Xj, Yj, Zj, Rot, L)
# initialize array
r = Array{Float64}(3,3,length(Xi))
# r = SharedArray{Float64,3}((3,3,length(Xi))) # for SharedArrays
for i in 1:length(Xi)
# @parallel for i in 1:length(Xi) # for SharedArrays
# other code...
r[:,:,i] = [0 0 rxZ; cosd(Rot[i]) -rxZ*sind(Rot[i]) 0; sind(Rot[i]) rxZ*cosd(Rot[i]) 0]
# other code...
r[:,:,i] = [rxX rxY rxZ;r21 r22 r23;r31 r32 r33]
end
end
return r
end
Pseudocode B
function jt_transcoord_parallel(Xi, Yi, Zi, Xj, Yj, Zj, Rot, L)
n = length(Xi)
r = SharedArray{Float64}((3*n,3))
@parallel for i in 1:length(Xi)
# other code...
r[(3*(i-1)+1):3*(i),:] = [0 0 rxZ; cosd(Rot[i]) -rxZ*sind(Rot[i]) 0; sind(Rot[i]) rxZ*cosd(Rot[i]) 0]
# other code...
r[(3*(i-1)+1):3*(i),:] = [rxX rxY rxZ;r21 r22 r23;r31 r32 r33]
end
end
return r
end
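Either way, recovering the i-th 3x3 rotation matrix from the result is a one-line slice (a sketch matching the layouts above):
ri = r[:, :, i]                  # pseudocode A: slice along the third dimension
ri = r[(3*(i-1)+1):(3*i), :]     # pseudocode B: rows 3(i-1)+1 through 3i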

Why do translations and scaling behave as expected but rotation does not in a naive transform implementation?

I've been encountering an odd issue with manually applying transformation matrices to 2D points. When I apply a translation or scaling in the transformation matrix, the resulting location of the 2D point is where one would expect it to be. Rotation, on the other hand, seems to skew and rotate the shape in odd and unexpected ways.
I've been trying to figure out why this is occurring for a couple of days, but to no avail. I've checked the mathematics and it doesn't seem to be the cause of the problem (the composite transformation behaves as expected when applied via the transform attribute in SVG, but not when manually applied to each pixel), so I'm led to believe it's a flaw in my understanding of how transformation matrices are actually applied to 2D points. I'm naively multiplying the 2D point by the transformation matrix to get the new point.
Why isn't rotation behaving as expected?
An example that illustrates this issue is available on jsfiddle. Ignore the inverted y axis as the issue manifests itself even if a correctly oriented rotation matrix is used. Here is the main part:
window.onload = function(e) {
var canvas = document.getElementsByTagName('canvas')[0];
if (canvas.getContext) {
var ctx = canvas.getContext('2d');
var ctm = [1.0, 0.0, 0.0, 1.0, 0.0, 0.0];
ctx.lineWidth = 0.0;
function matrix_mul(a, b) {
return [
a[1]*b[2] + a[0]*b[0],
a[1]*b[3] + a[0]*b[1],
a[3]*b[2] + a[2]*b[0],
a[3]*b[3] + a[2]*b[1],
b[4] + a[5]*b[2] + a[4]*b[0],
b[5] + a[5]*b[3] + a[4]*b[1]
];
}
function rotate(ang) {
ctm = matrix_mul(ctm, [Math.cos(ang), Math.sin(ang), -Math.sin(ang), Math.cos(ang), 0.0, 0.0]);
}
function paint_point(x, y) {
var x = ctm[0]*x + ctm[2]*y + ctm[4];
var y = ctm[1]*x + ctm[3]*y + ctm[5];
ctx.fillRect(x, y, 1, 1);
}
function square() {
for (i = 0; i <= 100; ++i) { paint_point(150+i, 150); }
for (j = 0; j <= 100; ++j) { paint_point(150, 150+j);
paint_point(250, 150+j); }
for (i = 0; i <= 100; ++i) { paint_point(150+i, 250); }
}
square();
for (d=0; d<3; ++d) {
rotate(Math.PI/14);
ctx.fillStyle = 'rgb(0, ' + d+30*40 + ', 0)';
square();
}
ctm = [1.0, 0.0, 0.0, 1.0, 0.0, 0.0];
ctx.fillStyle = 'rgb(0, 0, 255)';
rotate(Math.PI/14 * 3);
square();
} else {
alert("Something bad happend");
}
};
Just a simple typo in your transform function
function paint_point(x, y) {
var x = ctm[0]*x + ctm[2]*y + ctm[4];
var y = ctm[1]*x + ctm[3]*y + ctm[5];
ctx.fillRect(x, y, 1, 1);
}
You have declared x and y with var, but they are already defined as the function's arguments, so the declarations do nothing. Thus x is overwritten on the first line, making the second line use the already-transformed value of x instead of the original one. This is also why translation and scaling still looked correct: for those matrices ctm[1] is 0, so the corrupted x never contributes to y, whereas a rotation has a non-zero ctm[1] and the error shows.
The fix is simple.
function paint_point(x, y) {
var x1 = ctm[0]*x + ctm[2]*y + ctm[4];
var y1 = ctm[1]*x + ctm[3]*y + ctm[5];
ctx.fillRect(x1, y1, 1, 1);
}
Or better
function paint_point(x, y) {
ctx.fillRect(
ctm[0]*x + ctm[2]*y + ctm[4],
ctm[1]*x + ctm[3]*y + ctm[5],
1, 1
);
}

How does this 2d noise generation function work? Does it have a name?

I came across this 2D noise function in the Book of Shaders
float noise(vec2 st) {
vec2 integerPart = floor(st);
vec2 fractionalPart = fract(st);
float s00 = random(integerPart);
float s01 = random(integerPart + vec2(0.0, 1.0));
float s10 = random(integerPart + vec2(1.0, 0.0));
float s11 = random(integerPart + vec2(1.0, 1.0));
float dx1 = s10 - s00;
float dx2 = s11 - s01;
float dy1 = s01 - s00;
float dy2 = s11 - s10;
float alpha = smoothstep(0.0, 1.0, fractionalPart.x);
float beta = smoothstep(0.0, 1.0, fractionalPart.y);
return s00 + alpha * dx1 + (1.0 - alpha) * beta * dy1 + alpha * beta * dy2;
}
It is clear what this function does: it generates four random numbers at the vertices of a square, then interpolates them. What I am finding difficult is understanding why the interpolation (the s00 + alpha * dx1 + (1 - alpha) * beta * dy1 + alpha * beta * dy2 expression) works. How is it interpolating the four values when it does not seem to be symmetric in the x and y values?
If you expand the last line, it's:
return s00 * (1-alpha) * (1-beta) +
s10 * alpha * (1-beta) +
s01 * (1-alpha) * beta +
s11 * alpha * beta;
Which is symmetric in x and y. If you add up the weights:
alpha * beta + (1-alpha) * beta + alpha * (1-beta) + (1-alpha) * (1-beta)
= (alpha + 1-alpha) * beta + (alpha + 1-alpha) * (1-beta)
= beta + 1-beta
= 1
so it's an affine combination of the values at the corners.
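Equivalently, the last line of the noise function can be written with nested mix calls, which makes the x/y symmetry explicit (a sketch reusing the question's variable names):
return mix(mix(s00, s10, alpha), mix(s01, s11, alpha), beta);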
