offset error after the 2nd texture access - opengl-es

I want my fragment shader to traverse a serialized quad tree.
When an inner node is found, the rg values are interpreted as an index into the same texture.
A blue value of 0 marks an inner node.
In a first step, a pointer is read from a 2x2 subimage at position 0x0 using the provided uv coordinates.
Then that pointer is used to access another 2x2 portion of the same texture.
However, for each child of the root node there is an increasing offset error that results in the wrong color.
Here is my shader (for debug purposes the loop is fixed at one iteration, so only 2 levels of the quad tree get accessed).
Also for debugging, I put a red 2x2 image at the location of the top-left child, a green image for the top-right, blue for the bottom-left, and yellow for the bottom-right child.
The resulting image is this:
I am completely clueless. Can one of you think of a reason why this is happening?
I checked all the coordinate conversions and calculations three times; they are all correct.
Here is the shader:
// virtual_image.fs
precision highp float;

uniform sampler2D t_atlas;
uniform sampler2D t_tree;
uniform vec2 gridpoolSize;
uniform vec2 atlasTileSize;
uniform vec2 atlasSize;

varying vec2 v_texcoord;

const float LEAF_MARKER = 1.0;
const float NODE_MARKER = 0.0;
const float CHANNEL_PRECISION = 255.0;

vec2 decode(const vec2 vec) {
    return vec * CHANNEL_PRECISION;
}

void main()
{
    vec4 oc = vec4(1); // output color
    vec4 tColor = texture2D(t_tree, v_texcoord);  // only for debugging
    vec4 aColor = texture2D(t_atlas, v_texcoord); // only for debugging
    // oc = mix(tColor, aColor, 0.5);

    highp vec2 localCoord = v_texcoord;
    // By convention the root node starts at [0,0],
    // so we read the first pointer relative to that point.
    // Dividing by gridpoolSize converts the local coords into
    // local coords of the first grid at [0,0].
    highp vec3 pointer = texture2D(t_tree, localCoord / gridpoolSize).rgb; // pointer is correct at this point!

    for (int i = 0; i < 1; i++) {
        // divides the local coords into 4 quadrants
        localCoord = fract(localCoord * 2.0); // localCoord is correct!

        // branch
        if (pointer.b <= NODE_MARKER + 0.1) {
            highp vec2 index = decode(pointer.rg); // index is correct!
            highp vec2 absoluteCoord = (localCoord + index) / gridpoolSize; // absoluteCoord is correct!
            // we have an inner node: get the next pointer and continue
            pointer = texture2D(t_tree, absoluteCoord).rgb;
            oc.rgb = pointer.rgb; // this point in the code introduces a growing offset, I don't know where this comes from. BUG LOCATION
            //gl_FragColor = vec4(1,0,0,1);
        } else {
            if (pointer.b >= LEAF_MARKER - 0.1) {
                // we have a leaf
                vec2 atlasCoord = ((decode(pointer.rg) * atlasTileSize) / atlasSize) + (localCoord * (atlasTileSize / atlasSize));
                vec4 atlasColor = texture2D(t_atlas, atlasCoord);
                //vec4 atlasCoordColor = vec4(atlasCoord,0,1);
                //gl_FragColor = mix(atlasColor, vec4(localCoord, 0, 1), 1.0);
                //gl_FragColor = vec4(localCoord, 0, 1);
                oc = vec4(1,0,1,1);
            } else {
                // we have an empty cell
                oc = vec4(1,0,1,1);
            }
        }
    }
    //oc.rgb = pointer;
    //oc.rgb = oc.rgb * (255.0 / 20.0);
    gl_FragColor = oc;
}
For details on how to serialize a quad tree as a texture, take a look at this paper: Octree Textures on the GPU

It turns out that it's a rounding problem.
The code in the decode function has to be changed to:
vec2 decode(const vec2 vec) {
    return floor(0.5 + (vec * CHANNEL_PRECISION)); // round to nearest instead of truncating
}
The values returned should have been integer indices but were slightly too small, e.g. 5.99 instead of 6.
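To make the failure mode concrete, here is a worked example of the arithmetic (the exact values are illustrative, not measured from the original texture):

// encode on the CPU:  index 6 is stored in an 8-bit channel as 6.0 / 255.0 ≈ 0.023529
// read in the shader: ≈ 0.0235 (slightly low after filtering / precision loss)
// naive decode:       0.0235 * 255.0 = 5.9925, which truncates to 5 (wrong cell)
// rounded decode:     floor(0.5 + 5.9925) = 6.0 (correct cell)
vec2 index = decode(pointer.rg); // with the rounding decode above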

Related

GLSL - different precision in different parts of fragment shader

I have a simple fragment shader that draws a test grid pattern.
I don't really have a problem, but I've noticed a weird behavior that's inexplicable to me. Don't mind the weird constants; they get filled in during shader assembly before compilation. Also, vertexPosition is the actual calculated position in world space, so I can move the shader texture when the mesh itself moves.
Here's the code of my shader:
#version 300 es
precision highp float;

in highp vec3 vertexPosition;
out mediump vec4 fragColor;

const float squareSize = __CONSTANT_SQUARE_SIZE;
const vec3 color_base = __CONSTANT_COLOR_BASE;
const vec3 color_l1 = __CONSTANT_COLOR_L1;

float minWidthX;
float minWidthY;
vec3 color_green = vec3(0.0, 1.0, 0.0);

void main()
{
    // calculate l1 border positions
    float dimension = squareSize;
    int roundX = int(vertexPosition.x / dimension);
    int roundY = int(vertexPosition.z / dimension);
    float remainderX = vertexPosition.x - float(roundX) * dimension;
    float remainderY = vertexPosition.z - float(roundY) * dimension;

    vec3 dyX = dFdy(vec3(vertexPosition.x, vertexPosition.y, 0));
    vec3 dxX = dFdx(vec3(vertexPosition.x, vertexPosition.y, 0));
    minWidthX = max(length(dxX), length(dyX));

    vec3 dyY = dFdy(vec3(0, vertexPosition.y, vertexPosition.z));
    vec3 dxY = dFdx(vec3(0, vertexPosition.y, vertexPosition.z));
    minWidthY = max(length(dxY), length(dyY));

    // fill l1 squares
    if (remainderX <= minWidthX)
    {
        fragColor = vec4(color_l1, 1.0);
        return;
    }
    if (remainderY <= minWidthY)
    {
        fragColor = vec4(color_l1, 1.0);
        return;
    }

    // fill base color
    fragColor = vec4(color_base, 1.0);
    return;
}
So, with this code everything works well.
I then wanted to optimize it a little by moving the calculations that only concern horizontal lines to after the vertical-lines check, because those calculations are useless if the vertical-lines check is true. Like this:
#version 300 es
precision highp float;

in highp vec3 vertexPosition;
out mediump vec4 fragColor;

const float squareSize = __CONSTANT_SQUARE_SIZE;
const vec3 color_base = __CONSTANT_COLOR_BASE;
const vec3 color_l1 = __CONSTANT_COLOR_L1;

float minWidthX;
float minWidthY;
vec3 color_green = vec3(0.0, 1.0, 0.0);

void main()
{
    // calculate l1 border positions
    float dimension = squareSize;
    int roundX = int(vertexPosition.x / dimension);
    int roundY = int(vertexPosition.z / dimension);
    float remainderX = vertexPosition.x - float(roundX) * dimension;
    float remainderY = vertexPosition.z - float(roundY) * dimension;

    vec3 dyX = dFdy(vec3(vertexPosition.x, vertexPosition.y, 0));
    vec3 dxX = dFdx(vec3(vertexPosition.x, vertexPosition.y, 0));
    minWidthX = max(length(dxX), length(dyX));

    // fill l1 squares
    if (remainderX <= minWidthX)
    {
        fragColor = vec4(color_l1, 1.0);
        return;
    }

    vec3 dyY = dFdy(vec3(0, vertexPosition.y, vertexPosition.z));
    vec3 dxY = dFdx(vec3(0, vertexPosition.y, vertexPosition.z));
    minWidthY = max(length(dxY), length(dyY));

    if (remainderY <= minWidthY)
    {
        fragColor = vec4(color_l1, 1.0);
        return;
    }

    // fill base color
    fragColor = vec4(color_base, 1.0);
    return;
}
But even though this seemingly should not affect the result at all, it does, by quite a bit.
Below are the two screenshots. The first one is the original code; the second is the "optimized" one, which works badly.
Original version:
Optimized version (looks much worse):
Notice how the lines became "fuzzy" even though seemingly no numbers should have changed at all.
Note: this isn't because minWidthX/Y are global. I initially optimized by making them local.
I also initially moved the roundY and remainderY calculations below the X check as well, and the result is the same.
Note 2: I tried adding the highp keyword to each of those calculations specifically, but that doesn't change anything (not that I expected it to, but I tried nevertheless).
Could anyone please explain to me why this happens? I would like to know for my future shaders, and actually I would like to optimize this one as well. I would like to understand the principle behind the precision loss here, because it doesn't make any sense to me.
For the answer I'll refer to the OpenGL ES Shading Language 3.20 Specification, which is the same as the OpenGL ES Shading Language 3.00 Specification on this point.
8.14.1. Derivative Functions
[...] Derivatives are undefined within non-uniform control flow.
and further
3.9.2. Uniform and Non-Uniform Control Flow
When executing statements in a fragment shader, control flow starts as uniform control flow; all fragments enter the same control path into main(). Control flow becomes non-uniform when different fragments take different paths through control-flow statements (selection, iteration, and jumps).[...]
That means that the result of the derivative functions in the first case (of your question) is well defined.
But in the second case it is not:
if (remainderX <= minWidthX)
{
    fragColor = vec4(color_l1, 1.0);
    return;
}
vec3 dyY = dFdy(vec3(0, vertexPosition.y, vertexPosition.z));
vec3 dxY = dFdx(vec3(0, vertexPosition.y, vertexPosition.z));
because the return statement acts like a selection, and all the code after the block containing the return statement is in non-uniform control flow.
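A minimal sketch of the usual workaround, assuming the layout of the original shader: hoist every dFdx/dFdy call above the first divergent return, then branch. Only the derivative calls need uniform control flow; the comparisons themselves can stay where they are.

// compute all derivatives while control flow is still uniform...
vec3 dyY = dFdy(vec3(0, vertexPosition.y, vertexPosition.z));
vec3 dxY = dFdx(vec3(0, vertexPosition.y, vertexPosition.z));
minWidthY = max(length(dxY), length(dyY));

// ...and only then take the early-out branches
if (remainderX <= minWidthX) { fragColor = vec4(color_l1, 1.0); return; }
if (remainderY <= minWidthY) { fragColor = vec4(color_l1, 1.0); return; }
fragColor = vec4(color_base, 1.0);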

Compare current depth with older depth

We have a scene rendered to a WebGLRenderTarget with a depth texture, which is being passed to a ShaderMaterial in a different scene as tDepth. The following is the fragment shader for that ShaderMaterial. It does a depth check and then renders onto the canvas. (I could just disable autoclear of depth and have it working, but I want a custom depth function, hence the code.)
varying vec2 vUv;
uniform sampler2D tDepth;
uniform float width;
uniform float height;

void main() {
    vec2 clipCoord = vec2(gl_FragCoord.x / width, gl_FragCoord.y / height);
    float oldDepth = texture2D(tDepth, clipCoord).r; // Gets wrong value.
    float currentDepth = gl_FragCoord.z;
    if (oldDepth > currentDepth) {
        gl_FragColor = vec4(1, 0, 0, 0);
    } else {
        discard;
    }
}
How do I get the correct value of oldDepth?
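One thing worth checking (an assumption, since the uniform setup isn't shown): gl_FragCoord is in window coordinates measured in device pixels, so width and height must be the render target's actual pixel dimensions, including any devicePixelRatio scaling, or the lookup samples the wrong texel. A sketch:

// Hedged sketch: width/height are assumed to be set from JS to the drawing
// buffer size in device pixels (e.g. renderer.domElement.width/height in
// three.js, which already includes devicePixelRatio), not the CSS size.
vec2 uv = gl_FragCoord.xy / vec2(width, height);
float oldDepth = texture2D(tDepth, uv).r; // window-space depth in [0,1]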

Drawing a circle in fragment shader

I am a complete noob when it comes to creating shaders. Or, better said, I just learned about it yesterday.
I am trying to create a really simple circle. I thought I had finally figured it out, but it turns out to be too large. It should match the size of the DisplayObject the filter is applied to.
The fragment shader:
precision mediump float;

varying vec2 vTextureCoord;

vec2 resolution = vec2(1.0, 1.0);

void main() {
    vec2 uv = vTextureCoord.xy / resolution.xy;
    uv -= 0.5;
    uv.x *= resolution.x / resolution.y;

    float r = 0.5;
    float d = length(uv);
    float c = smoothstep(d, d + 0.003, r);
    gl_FragColor = vec4(vec3(c, 0.5, 0.0), 1.0);
}
Example using Pixi.js:
var app = new PIXI.Application();
document.body.appendChild(app.view);

var background = PIXI.Sprite.fromImage("required/assets/bkg-grass.jpg");
background.width = 200;
background.height = 200;
app.stage.addChild(background);

var vertexShader = `
attribute vec2 aVertexPosition;
attribute vec2 aTextureCoord;

uniform mat3 projectionMatrix;

varying vec2 vTextureCoord;

void main(void)
{
    gl_Position = vec4((projectionMatrix * vec3(aVertexPosition, 1.0)).xy, 0.0, 1.0);
    vTextureCoord = aTextureCoord;
}
`;

var fragShader = `
precision mediump float;

varying vec2 vTextureCoord;

vec2 resolution = vec2(1.0, 1.0);

void main() {
    vec2 uv = vTextureCoord.xy / resolution.xy;
    uv -= 0.5;
    uv.x *= resolution.x / resolution.y;

    float r = 0.5;
    float d = length(uv);
    float c = smoothstep(d, d + 0.003, r);
    gl_FragColor = vec4(vec3(c, 0.5, 0.0), 1.0);
}
`;

var filter = new PIXI.Filter(vertexShader, fragShader);
filter.padding = 0;
background.filters = [filter];
body { margin: 0; }
<script src="https://cdnjs.cloudflare.com/ajax/libs/pixi.js/4.5.2/pixi.js"></script>
Pixi.js's vTextureCoord does not go from 0 to 1.
From the docs
V4 filters differ from V3. You can't just add in the shader and assume that texture coordinates are in the [0,1] range.
...
Note: vTextureCoord multiplied by filterArea.xy is the real size of bounding box.
If you want to get the pixel coordinates, use uniform filterArea, it will be passed to the filter automatically.
uniform vec4 filterArea;
...
vec2 pixelCoord = vTextureCoord * filterArea.xy;
They are in pixels. That won't work if we want something like "fill the ellipse into a bounding box". So, let's pass the dimensions too! PIXI doesn't do it automatically, so we need a manual fix:
filter.apply = function(filterManager, input, output)
{
    this.uniforms.dimensions[0] = input.sourceFrame.width;
    this.uniforms.dimensions[1] = input.sourceFrame.height;
    // draw the filter...
    filterManager.applyFilter(this, input, output);
}
Let's combine it in the shader:
uniform vec4 filterArea;
uniform vec2 dimensions;
...
vec2 pixelCoord = vTextureCoord * filterArea.xy;
vec2 normalizedCoord = pixelCoord / dimensions;
Here's your snippet updated.
var app = new PIXI.Application();
document.body.appendChild(app.view);

var background = PIXI.Sprite.fromImage("required/assets/bkg-grass.jpg");
background.width = 200;
background.height = 200;
app.stage.addChild(background);

var vertexShader = `
attribute vec2 aVertexPosition;
attribute vec2 aTextureCoord;

uniform mat3 projectionMatrix;

varying vec2 vTextureCoord;

void main(void)
{
    gl_Position = vec4((projectionMatrix * vec3(aVertexPosition, 1.0)).xy, 0.0, 1.0);
    vTextureCoord = aTextureCoord;
}
`;

var fragShader = `
precision mediump float;

varying vec2 vTextureCoord;

uniform vec2 dimensions;
uniform vec4 filterArea;

void main() {
    vec2 pixelCoord = vTextureCoord * filterArea.xy;
    vec2 uv = pixelCoord / dimensions;
    uv -= 0.5;

    float r = 0.5;
    float d = length(uv);
    float c = smoothstep(d, d + 0.003, r);
    gl_FragColor = vec4(vec3(c, 0.5, 0.0), 1.0);
}
`;

var filter = new PIXI.Filter(vertexShader, fragShader);
filter.apply = function(filterManager, input, output)
{
    this.uniforms.dimensions[0] = input.sourceFrame.width;
    this.uniforms.dimensions[1] = input.sourceFrame.height;
    // draw the filter...
    filterManager.applyFilter(this, input, output);
}
filter.padding = 0;
background.filters = [filter];
body { margin: 0; }
<script src="https://cdnjs.cloudflare.com/ajax/libs/pixi.js/4.5.2/pixi.js"></script>
It seems you've stumbled upon weird floating-point precision problems: the texture coordinates (vTextureCoord) in your fragment shader aren't strictly in the (0, 1) range. Here's what I got when I added the line gl_FragColor = vec4(vTextureCoord, 0, 1):
It seems good, but if we inspect it closely, the lower-right pixel should be (1, 1, 0), but it isn't:
The problem goes away if, instead of setting the size to 500 by 500, we use a power-of-two size (say, 512 by 512):
The other possible way to mitigate the problem would be to try to circumvent Pixi's code that computes the projection matrix and provide your own that transforms a smaller quad into the desired screen position.

GLSL Check texture alpha between 2 vectors

I'm trying to learn how to make shaders, and a little while ago I posted a question here: GLSL Shader - Shadow between 2 textures on a plane
So, the answer gave me the right direction to take, but I'm having trouble checking whether there is a fragment that is not transparent between the current fragment and the light position.
So here is the code :
Vertex Shader :
attribute vec3 position;
attribute vec3 normal;
attribute vec2 uv;

varying vec2 uvVarying;
varying vec3 normalVarying;
varying vec3 posVarying;

uniform vec4 uvBounds0;
uniform mat4 agk_World;
uniform mat4 agk_ViewProj;
uniform mat3 agk_WorldNormal;

void main()
{
    vec4 pos = agk_World * vec4(position, 1);
    gl_Position = agk_ViewProj * pos;
    vec3 norm = agk_WorldNormal * normal;
    posVarying = pos.xyz;
    normalVarying = norm;
    uvVarying = uv * uvBounds0.xy + uvBounds0.zw;
}
And the fragment shader :
#ifdef GL_ES
#ifdef GL_FRAGMENT_PRECISION_HIGH
precision highp float;
#else
precision mediump float;
#endif
#endif

uniform sampler2D texture0;
uniform sampler2D texture1;

varying vec2 uvVarying;
varying vec3 normalVarying;
varying vec3 posVarying;

uniform vec4 uvBounds0;
uniform vec2 playerPos;
uniform vec2 agk_resolution;
uniform vec4 agk_PLightPos;
uniform vec4 agk_PLightColor;
uniform vec4 agk_ObjColor;

void main (void)
{
    vec4 lightPos = agk_PLightPos;
    lightPos.x = playerPos.x;
    lightPos.y = -playerPos.y;

    vec3 dir = vec3(lightPos.x - posVarying.x, lightPos.y - posVarying.y, lightPos.z - posVarying.z);
    vec3 norm = normalize(normalVarying);
    float atten = dot(dir, dir);
    atten = clamp(lightPos.w / atten, 0.0, 1.0);
    float intensity = dot(normalize(dir), norm);
    intensity = clamp(intensity, 0.0, 1.0);
    vec3 lightColor = agk_PLightColor.rgb * intensity * atten;
    vec3 shadowColor = agk_PLightColor.rgb * 0.0;

    bool inTheShadow = false;
    if (intensity * atten > 0.05) {
        float distanceToLight = length(posVarying.xy - lightPos.xy);
        for (float i = distanceToLight; i > 0.0; i -= 0.1) {
            vec2 uvShadow = ???
            if (texture2D(texture0, uvShadow).a > 0.0) {
                inTheShadow = true;
                break;
            }
        }
    }

    if (texture2D(texture0, uvVarying).a == 0.0) {
        if (inTheShadow == true) {
            gl_FragColor = texture2D(texture1, uvVarying) * vec4(shadowColor, 1) * agk_ObjColor;
        }
        else {
            gl_FragColor = texture2D(texture1, uvVarying) * vec4(lightColor, 1) * agk_ObjColor;
        }
    }
    else {
        gl_FragColor = texture2D(texture0, uvVarying) * agk_ObjColor;
    }
}
So, this is the part where I have some troubles :
bool inTheShadow = false;
if (intensity * atten > 0.05) {
    float distanceToLight = length(posVarying.xy - lightPos.xy);
    for (float i = distanceToLight; i > 0.0; i -= 0.1) {
        vec2 uvShadow = ???
        if (texture2D(texture0, uvShadow).a > 0.0) {
            inTheShadow = true;
            break;
        }
    }
}
I first check if I'm in the light radius with intensity * atten > 0.05.
Then I get the distance from the current fragment to the light position.
And then I make a for loop to check each fragment between the current fragment and the light position. I tried some calculations to get the current fragment, but with no success.
So, any idea how I can calculate uvShadow in my loop?
I hope I'm using the right variables too. In the last part of my code, where I use gl_FragColor, I'm using uvVarying to get the current fragment (if I'm not mistaken), but to get the light distance I had to calculate the length between posVarying and lightPos, not between uvVarying and lightPos. (I made a test where the further I was from the light, the redder it became: with posVarying it gave me a circle with a gradient around my player (lightPos), but when I used uvVarying the circle was only one color, which became more or less red as I moved my player towards the center of the screen.)
Thanks and best regards,
Max
When you access a texture through texture2D() you use normalised coordinates, i.e. numbers that go from (0.0, 0.0) to (1.0, 1.0). So you need to convert your world positions to this normalised space. Something like:
vec2 uvShadow = posVarying.xy + ((distanceToLight / 0.1) * i * (posVarying.xy - lightPos.xy));
// Take uvShadow from world space to view space, this is -1.0 to 1.0
uvShadow *= mat2(inverse(agk_View)); // This could be optimized if you are using orthographic projection
// Now take it to texture space: scale first, then offset
uvShadow *= 0.5;
uvShadow += 0.5;
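For the loop itself, here is a hedged sketch of how the march could look once a uv-space light position is available (lightUv is an assumed vec2 uniform holding the light position already converted to the same [0,1] texture space; the fixed step count is needed because GLSL ES 1.00 only allows constant loop bounds):

const int STEPS = 32; // assumption: enough samples for this sprite size
bool inTheShadow = false;
for (int s = 1; s < STEPS; s++) {
    // walk from the current fragment towards the light in texture space
    vec2 uvShadow = mix(uvVarying, lightUv, float(s) / float(STEPS));
    if (texture2D(texture0, uvShadow).a > 0.0) {
        inTheShadow = true;
        break;
    }
}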

moving from one point to point on sphere

I'm working with a GPU based particle system.
There are 1 million particles computed by passing in the x,y,z positions as rgb values on a 1024*1024 texture. The same is being done for their velocities.
I'm trying to make them move from an arbitrary point to a point on a sphere.
My current shader, which I'm using for the computation, is moving from one point to another directly.
I'm not using the velocity texture at the moment:
float mass = texture2D( posArray, texCoord.st).a; // still needed for the output's alpha channel
vec3 p = texture2D( posArray, texCoord.st).rgb;
// vec3 v = texture2D( velArray, texCoord.st).rgb;

// map into 'cinder space'
p = (p * -1.0) + 0.5;

// vec3 acc = -0.0002*p; // Centripetal force
// vec3 ayAcc = 0.00001*normalize(cross(vec3(0, 1, 0), p)); // Angular force
// vec3 new_v = v + mass*(acc+ayAcc);
vec3 new_p = p + ((moveToPos - p) / duration);

// map out of 'cinder space'
new_p = (new_p - 0.5) * -1.0;

gl_FragData[0] = vec4(new_p.x, new_p.y, new_p.z, mass);
//gl_FragData[1] = vec4(new_v.x, new_v.y, new_v.z, 1.0);
moveToPos is the mouse pointer as a float (0.0 to 1.0).
The coordinate system is being translated from (0.5, 0.5 > -0.5, -0.5) to (0.0, 0.0 > 1.0, 1.0).
I'm completely new to vector maths, and the calculations are confusing me. I know I need to use the formula:
x = R sin φ cos θ
y = R sin φ sin θ
z = R cos φ
but calculating the angles from moveToPos(xyz) to p(xyz) remains a problem.
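(For reference, a minimal sketch of recovering those angles from a Cartesian point, assuming the sphere is centered at c with radius R; acos and atan are standard GLSL built-ins:)

// spherical angles of point p relative to sphere center c
vec3 d = normalize(p - c);
float phi   = acos(clamp(d.z, -1.0, 1.0)); // polar angle, in [0, pi]
float theta = atan(d.y, d.x);              // azimuth, in (-pi, pi]
// the point at those angles on the sphere:
// c + R * vec3(sin(phi) * cos(theta), sin(phi) * sin(theta), cos(phi))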
I wrote the original version of this GPU-particles shader a few years back (now at https://github.com/num3ric/Cinder-Particles). Here is one possible approach to your problem.
I would start with a fragment shader applying a spring force to the particles so that they more or less are constrained to the surface of a sphere. Something like this:
uniform sampler2D posArray;
uniform sampler2D velArray;

varying vec4 texCoord;

void main(void)
{
    float mass = texture2D( posArray, texCoord.st).a;
    vec3 p = texture2D( posArray, texCoord.st).rgb;
    vec3 v = texture2D( velArray, texCoord.st).rgb;

    float x0 = 0.5; // distance from center of sphere to be maintained
    float x = distance(p, vec3(0,0,0)); // current distance
    vec3 acc = -0.0002*(x - x0)*p; // apply spring force (Hooke's law)
    vec3 new_v = v + mass*(acc);
    new_v = 0.999*new_v; // friction to slow down velocities over time
    vec3 new_p = p + new_v;

    // Render to positions texture
    gl_FragData[0] = vec4(new_p.x, new_p.y, new_p.z, mass);
    // Render to velocities texture
    gl_FragData[1] = vec4(new_v.x, new_v.y, new_v.z, 1.0);
}
Then, I would pass a new vec3 uniform for the mouse position intersecting a sphere of the same radius (done outside the shader in Cinder).
Now, combine this with the previous soft spring constraint. You could add a tangential force towards this attraction point: start with a simple (mousePos - p) acceleration, and then figure out a way to make this force exclusively tangential using cross products, as in the sketch below.
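A hedged sketch of that tangential force (it assumes the sphere is centered at the origin, mousePos is the intersection point passed in as a uniform, and the 0.0001 gain is a placeholder; x and x0 are from the spring shader above):

vec3 n = normalize(p);                         // surface normal at the particle
vec3 toMouse = mousePos - p;                   // raw attraction toward the mouse point
vec3 tangential = cross(cross(n, toMouse), n); // project out the radial component
vec3 acc = -0.0002*(x - x0)*p                  // spring term, as above
         + 0.0001*tangential;                  // tangential pull (assumed gain)
vec3 new_v = 0.999*(v + mass*acc);             // integrate with the same friction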
I'm not sure how the spherical coordinates approach would work here.
x = R sin φ cos θ
y = R sin φ sin θ
z = R cos φ
Where do you get φ and θ? The textures store the positions and velocities in Cartesian coordinates. Plus, converting back and forth is not really an option.
My explanation could be too advanced if you are not comfortable with vectors. Unfortunately, shaders and particle animation are very mathematical by nature.
Here is a solution that I've worked out. It works; however, if I move the center point of the spheres outside their own bounds, I lose particles.
#define NPEOPLE 5

uniform sampler2D posArray;
uniform sampler2D velArray;
uniform vec3 centerPoint[NPEOPLE];
uniform float radius[NPEOPLE];
uniform float duration;

varying vec4 texCoord;

void main(void) {
    float personToGet = texture2D( posArray, texCoord.st).a;
    vec3 p = texture2D( posArray, texCoord.st).rgb;
    float mass = texture2D( velArray, texCoord.st).a;
    vec3 v = texture2D( velArray, texCoord.st).rgb;

    // map into 'cinder space'
    p = (p * -1.0) + 0.5;

    vec3 vec_p = p - centerPoint[int(personToGet)];
    float len_vec_p = sqrt( (vec_p.x * vec_p.x) + (vec_p.y * vec_p.y) + (vec_p.z * vec_p.z) ); // i.e. length(vec_p)
    vec_p = ( ( radius[int(personToGet)] /* mass */ ) / len_vec_p ) * vec_p;
    vec3 new_p = ( vec_p + centerPoint[int(personToGet)] );
    new_p = p + ( (new_p - p) / (duration) );

    // map out of 'cinder space'
    new_p = (new_p - 0.5) * -1.0;

    vec3 new_v = v;
    gl_FragData[0] = vec4(new_p.x, new_p.y, new_p.z, personToGet);
    gl_FragData[1] = vec4(new_v.x, new_v.y, new_v.z, mass);
}
I'm passing in an array of 5 vec3s and an array of 5 floats, mapped as 5 center points and radii.
The particles are set up with a random position at the beginning and move towards the sphere whose array index is stored in the alpha value of the position texture.
My aim is to pass in blob data from OpenCV and map the spheres to people on a camera feed.
It's really uninteresting visually at the moment, so I will need to use the velocity texture to add to the behaviour of the particles.
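As a starting point for that, a hedged sketch of folding the velocity texture into the update (the gains are placeholders, and target stands for the point on the person's sphere computed above):

vec3 desired = target - p;           // target: assumed point on the mapped sphere
vec3 steer = 0.002*desired - 0.01*v; // assumed spring + damping gains
vec3 new_v = v + mass*steer;
vec3 new_p = p + new_v;
gl_FragData[0] = vec4(new_p, personToGet);
gl_FragData[1] = vec4(new_v, mass);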
