I have tried to port this https://codepen.io/jhnsnc/pen/qPZvvM to Pixi, but without success.
What I have tried so far is the following:
let video = document.createElement("video");
video.src = videoUrl;
video.loop = true;
video.muted = true;

var videoTexture = PIXI.Texture.fromVideo(video);

let uniforms = {
    time: { type: "f", value: 1.0 },
    texture: { type: "sampler2D", value: videoTexture }
};

this.videoSprite = new PIXI.Sprite(videoTexture);
this.videoSprite.filters = [
    new PIXI.Filter(
        document.getElementById("vertexShader").textContent,
        document.getElementById("fragmentShader").textContent,
        uniforms
    )
];
<script id="vertexShader" type="x-shader/x-vertex">
// varying vec2 vUv;
void main()
{
// vUv = uv;
gl_Position = projectionMatrix * modelViewMatrix * vec4( position, 1.0 );
}
</script>
<script id="fragmentShader" type="x-shader/x-fragment">
#ifdef GL_ES
precision highp float;
#endif
uniform float time;
uniform sampler2D texture;
// varying vec2 vUv;
void main( void ) {
gl_FragColor = vec4(
texture2D(textureA, vec2(512.0, 0.5 + 256.0/2.)).rgb,
texture2D(textureA, vec2(512.0, 256.0/2.)).r
);
}
</script>
I'm getting the error "castToBaseTexture is not a function".
This is how it was done in Pixi four years ago, but it no longer works in Pixi 4 and 5:
https://github.com/ENAML/pixi-alpha-video
Why does pixi.js WebGL differ from three.js?
Because they are different libraries written by different people.
They each have their own ways to specify attributes and uniforms, create textures, etc.
The names used in three.js, like projectionMatrix, modelViewMatrix, position, uv, vUv, normal, vNormal, vColor, etc., are all names chosen by three.js. They could have been named foo, moo, bar, banana, etc. They are not part of WebGL any more than the variables you declare in JavaScript are part of JavaScript.
The only variables defined by WebGL itself start with gl_, like gl_Position and gl_FragColor. All other variables are user defined; in this case the "users" are the developers of three.js and pixi.js.
If you want to use a shader designed for one library (three.js) in another library (pixi.js), you'll need to read through the docs and look up what that particular library decided to name all of its variables.
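For example, here is the same pass-through fragment shader written twice, once per library's conventions. This is a minimal sketch: vTextureCoord and uSampler are the names Pixi's filter system provides, while in three.js both vUv and the map uniform are names you declare and populate yourself.

// Pixi: these names are defined by Pixi's filter system.
const pixiFrag = `
    varying vec2 vTextureCoord;  // provided by Pixi's default filter vertex shader
    uniform sampler2D uSampler;  // Pixi's name for the filter's input texture

    void main() {
        gl_FragColor = texture2D(uSampler, vTextureCoord);
    }
`;
// Passing undefined for the vertex shader uses Pixi's default one.
const videoFilter = new PIXI.Filter(undefined, pixiFrag);

// three.js: vUv comes from your own vertex shader, and "map" is just
// a uniform name you chose and must supply a value for yourself.
const threeFrag = `
    varying vec2 vUv;
    uniform sampler2D map;

    void main() {
        gl_FragColor = texture2D(map, vUv);
    }
`;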
Further, the way you provide values to those variables is entirely defined by the creators of those libraries. You may need to set up attributes, buffers, textures, or uniforms, and how to do that will be different in every library. Again, you'll need to read their docs or look through their source or examples to figure out what they want.
This is no different than, say, React vs Vue vs Angular. They might all be built on the same tech (JS/HTML), but how you use them to do something is completely different. Similarly, both three.js and pixi.js happen to use WebGL, but they are not WebGL; they are their own things, and you need to read their docs/examples/source to figure out how to use them.
I'm currently working on a personal project to generate a planet using procedural methods. The problem is that I'm trying to achieve a glow effect using GLSL, and the intended effect works on desktop but not on mobile.
The following links illustrate the problem:
Intended Effect
iPhone 6S result
The planet is composed of four IcosahedronBufferGeometry meshes: earth, water, clouds, and the glow effect. I have tried disabling the glow effect, and then it works as intended on mobile. The conclusion, therefore, is that the problem lies within the glow effect.
Here is the code for the glow effect (fragment and vertex shader):
Vertex shader:
varying float intensity;

void main() {
    /* Calculates dot product of the view vector (cameraPosition) and the normal */
    /* High value exponent = less intense since dot product is always less than 1 */
    vec3 vNormal = vec3(modelMatrix * vec4(normal, 0.0));
    intensity = pow(0.2 - dot(normalize(cameraPosition), vNormal), 2.8);

    gl_Position = projectionMatrix * modelViewMatrix * vec4( position, 1.0 );
}
Fragment shader:
varying float intensity;

void main() {
    vec3 glow = vec3(255.0/255.0, 255.0/255.0, 255.0/255.0) * intensity;
    gl_FragColor = vec4( glow, 1.0 );
}
Three.js code:
var glowMaterial, glowObj, glowUniforms, sUniforms;

sUniforms = sharedUniforms();

/* Uniforms */
glowUniforms = {
    lightPos: {
        type: sUniforms["lightPos"].type,
        value: sUniforms["lightPos"].value
    }
};

/* Material */
glowMaterial = new THREE.ShaderMaterial({
    uniforms: THREE.UniformsUtils.merge([
        THREE.UniformsLib["ambient"],
        THREE.UniformsLib["lights"],
        glowUniforms
    ]),
    vertexShader: glow_vert,
    fragmentShader: glow_frag,
    lights: true,
    side: THREE.FrontSide,
    blending: THREE.AdditiveBlending,
    transparent: true
});

/* Add object to scene */
glowObj = new THREE.Mesh(new THREE.IcosahedronBufferGeometry(35, 4), glowMaterial);
scene.add(glowObj);
There are no error or warning messages in the console, either on desktop or on mobile (using the remote web inspector). As shown above, on mobile the glow renders as plain white, while on desktop the intensity/color/opacity of the material varies with the dot product computed in the vertex shader, as intended.
I want to render a texture to a plane using custom shaders. This texture has an 'offset' property set, which works correctly when I use a standard three.js material. However, I cannot figure out how to access these offsets in my custom fragment shader. It simply renders the whole texture over the whole plane:
Shaders:
<script id="vertex_shader" type="x-shader/x-vertex">
varying vec2 vUv;
void main() {
vUv = uv;
gl_Position = projectionMatrix *
modelViewMatrix *
vec4(position,1.0);
}
</script>
<script id="fragment_shader" type="x-shader/x-fragment">
uniform sampler2D texture1;
varying vec2 vUv;
void main()
{
gl_FragColor = texture2D(texture1, vUv);
}
</script>
If I could somehow say something like:
gl_FragColor = texture2D(texture1, vUv + texture1.offset);
maybe that would work, but obviously that throws an error.
UPDATE:
I sent the texture offset in as a uniform and that works. Don't know why I didn't think of that.
If I understand your question correctly, the answer should be to add and use a uniform mat3 uvTransform; declaration in your shader.
Three.js will look for and populate that uniform with the texture transformation (which includes texture1.offset) when rendering the texture onto your geometry.
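For illustration, a minimal sketch of what that route could look like, mirroring the uv handling in three.js's built-in shader chunks. The manual fallback assumes a newer three.js that has Texture.updateMatrix(), and a uvTransform uniform declared as a THREE.Matrix3 in your material; if the uniform is not populated automatically for your ShaderMaterial, you would need to copy the texture's matrix into it yourself.

// Vertex shader applying the full uv transform:
const vertexShader = `
    varying vec2 vUv;
    uniform mat3 uvTransform;

    void main() {
        vUv = ( uvTransform * vec3( uv, 1.0 ) ).xy;
        gl_Position = projectionMatrix * modelViewMatrix * vec4( position, 1.0 );
    }
`;

// Manual fallback: updateMatrix() rebuilds texture1.matrix from its
// offset/repeat/rotation, then we copy it into our uniform.
texture1.updateMatrix();
material.uniforms.uvTransform.value.copy( texture1.matrix );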
Alternatively, you can pass the data in texture1.offset in yourself and use it to offset your texture sampling as follows:
<script id="vertex_shader" type="x-shader/x-vertex">
varying vec2 vUv;
void main() {
vUv = uv;
gl_Position = projectionMatrix * modelViewMatrix * vec4(position,1.0);
}
</script>
<script id="fragment_shader" type="x-shader/x-fragment">
// [UPDATE] The uv offset uniform we defined below
uniform vec2 uvOffset;
// [UPDATE] The texture uniform we defined below
uniform sampler2D texture;
varying vec2 vUv;
void main()
{
// [UPDATE] Apply offset to texture lookup
gl_FragColor = texture2D(texture, vUv + uvOffset);
}
</script>
You would then accompany the vertex and fragment shaders above with the following THREE.ShaderMaterial:
<script>
    var material = new THREE.ShaderMaterial({
        uniforms: THREE.UniformsUtils.merge([
            {
                // Declare texture uniform in shader
                texture: { type: 't', value: null },
                // Declare texture offset in shader
                uvOffset: { type: 'v2', value: new THREE.Vector2(0, 0) }
            }
        ]),
        vertexShader: document.getElementById('vertex_shader').textContent,
        fragmentShader: document.getElementById('fragment_shader').textContent
    });

    // Shader uniforms can be updated like so
    material.uniforms.texture.value = yourTexture;
    material.uniforms.uvOffset.value = yourTextureOffsetVector2;
</script>
I'm programming shader materials on a smartphone with Three.js, so saving GPU performance is critical to me.
Situation:
Lots of planes serve as UI elements in my application, and they play shader animations at the beginning or during many user interactions. After the animation, these UI elements are in a "static" state (a "static" image combined by the shader), so there is no need to do the combination in the shader again and again (yes, it still renders; "render" here means the shader's multitexturing/program combination). If I can't stop this meaningless rendering, it will consume a big percentage of GPU performance.
Questions:
As we know, when any input value of a shader material changes, the WebGLRenderer will update the output material, whether "needsUpdate" is false or not.
But if no input value of the uniforms changes, will the WebGLRenderer render it again, or skip the rendering? This is question 1.
I've read the WebGLRenderer.js code, and with my limited understanding I am still not very sure how shader materials are rendered (from the code, I believe it keeps on rendering, but I'm not certain).
Question 2:
If the WebGLRenderer keeps rendering the shader material even when no input value has changed, is there a way to stop it from rendering the "unchanged" shader material, to save GPU performance?
Here's my simple test:
fragmentShader:
uniform float col;

void main() {
    gl_FragColor = vec4(col, 0.58, 0.06, 1.0);
}
vertexShader:
void main() {
    gl_Position = projectionMatrix * modelViewMatrix * vec4(position.x + 5.0, position.y, position.z + 25.0, 1.0);
}
JavaScript code:
var cube1, geo1, mat1;

function newbox2() {
    geo1 = new THREE.BoxGeometry(12, 4, 2);
    mat1 = new THREE.ShaderMaterial({
        uniforms: {
            col: { type: 'f', value: 1.0 }
        },
        vertexShader: "void main() { gl_Position = projectionMatrix * modelViewMatrix * vec4(position.x + 5.0, position.y, position.z + 25.0, 1.0); }",
        fragmentShader: "uniform float col; void main() { gl_FragColor = vec4(col, 0.58, 0.06, 1.0); }"
    });
    cube1 = new THREE.Mesh(geo1, mat1);
    scene.add(cube1);
    cube1.position.set(5, 5, 10);
}
Thanks.
If you have any kind of motion, animation, orbit controls, and such, plus a render loop, the renderer will render everything every frame. Even your camera changing position counts as a change in the shader's "input values" (uniforms).
The only way to achieve something like this is to not render again.
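A minimal "render on demand" sketch of that idea (renderer, scene, camera, and controls are assumed to already exist; the names are placeholders): keep the requestAnimationFrame loop, but only issue the draw call when something has flagged the frame as dirty.

let needsRender = true;

function requestRender() {
    needsRender = true;
}

function loop() {
    requestAnimationFrame(loop);
    if (!needsRender) return; // skip the draw entirely while nothing has changed
    needsRender = false;
    renderer.render(scene, camera);
}
loop();

// Call requestRender() from anything that invalidates the frame, e.g.
// controls.addEventListener('change', requestRender);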
I would like to know how I could move shaders that are currently included in my HTML into external files, so that I can include them in my gulp tasks. I took a look at how JavaScript shader files are written, but I don't understand them very well.
For example, with the glow shader code below, how could I move it to an external file?
<script type="x-shader/x-vertex" id="vertexShaderGlow">
uniform vec3 viewVector;
uniform float c;
uniform float p;
varying float intensity;
void main()
{
vec3 vNormal = normalize( normalMatrix * normal );
vec3 vNormel = normalize( normalMatrix * viewVector );
intensity = pow( c - dot(vNormal, vNormel), p );
gl_Position = projectionMatrix * modelViewMatrix * vec4( position, 1.0 );
}
</script>
<script type="x-shader/x-vertex" id="fragmentShaderGlow">
uniform vec3 glowColor;
varying float intensity;
void main()
{
vec3 glow = glowColor * intensity;
gl_FragColor = vec4( glow, 1.0 );
}
</script>
The other answer provided simply takes the GLSL code and turns each line into a string; each string is a value in an array, and the join call concatenates all of the strings with a \n character to make the code easier to read when debugging. I've done it this way many times before, and it is a legitimate solution to what you're trying to do.
But if you'd rather have external files with raw GLSL code, you can do that, too. Consider the two files:
glow_vertex.glsl
glow_fragment.glsl
These files contain the shader code which you would normally have in the script tags. You can use an XMLHttpRequest to fetch the files, and use the returned text as your shader code.
var vertexShader = null;
var fragmentShader = null;

function shadersDone() {
    var material = new THREE.ShaderMaterial({
        uniforms: { /* define your uniforms */ },
        vertexShader: vertexShader,
        fragmentShader: fragmentShader
    });
}

function vertexDone(code) {
    vertexShader = code;
    if (fragmentShader !== null) {
        shadersDone();
    }
}

function fragmentDone(code) {
    fragmentShader = code;
    if (vertexShader !== null) {
        shadersDone();
    }
}

var xhr1 = new XMLHttpRequest();
var xhr2 = new XMLHttpRequest();

xhr1.open("GET", "/server/glow_vertex.glsl", true);
xhr2.open("GET", "/server/glow_fragment.glsl", true);

xhr1.responseType = "text";
xhr2.responseType = "text";

xhr1.onload = function () {
    if (xhr1.readyState === xhr1.DONE && xhr1.status === 200) {
        vertexDone(xhr1.responseText);
    }
};
xhr2.onload = function () {
    if (xhr2.readyState === xhr2.DONE && xhr2.status === 200) {
        fragmentDone(xhr2.responseText);
    }
};

xhr1.send(null);
xhr2.send(null);
Note that that's all asynchronous. Also, your server is going to need to be configured to send GLSL files as plain text.
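The same idea reads more compactly with the newer fetch API and Promise.all (a sketch using the same hypothetical file paths as above):

Promise.all([
    fetch("/server/glow_vertex.glsl").then(function (r) { return r.text(); }),
    fetch("/server/glow_fragment.glsl").then(function (r) { return r.text(); })
]).then(function (shaders) {
    // shaders[0] and shaders[1] are the raw GLSL source strings
    var material = new THREE.ShaderMaterial({
        uniforms: { /* define your uniforms */ },
        vertexShader: shaders[0],
        fragmentShader: shaders[1]
    });
});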
As long as we're talking about the modern web...
There is also the option to import your shader code. VERY BIG BUT: it's currently only supported in Chrome and Opera (although polyfills do exist). Microsoft Edge lists the functionality as "under consideration," and Firefox does not intend to implement it in its current state. So take what follows with a large grain of salt, and keep an eye on: http://caniuse.com/#search=import
In your HTML, and before the JavaScript which would use it...
<link id="vertexImport" rel="import" href="/server/glow_vertex.glsl" />
<link id="fragmentImport" rel="import" href="/server/glow_fragment.glsl" />
Then later in your JavaScript:
var material = new THREE.ShaderMaterial({
    vertexShader: document.getElementById("vertexImport").import.body.childNodes[0].data,
    fragmentShader: document.getElementById("fragmentImport").import.body.childNodes[0].data
});
Again, this is asynchronous. You may need to add an onload handler for each link, so you don't attempt to access the code before it's loaded.
The answer suggesting joining an array of GLSL lines is a pattern you'll encounter in three.js code, but it should probably be avoided in this use case.
It may be useful in some kind of module, as a sort of "compiled" snapshot of a shader that is not intended to be modified.
Otherwise, the main downsides of this approach are the lack of syntax highlighting and the verbosity.
Nowadays most JS code is transformed one way or another, so shader code can be inlined like this:
const myShader = new THREE.ShaderMaterial({
    vertexShader: require('./myVert.vs'),
    fragmentShader: require('./myFrag.fs')
});
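Note that require() only returns the file contents as a string if your build is set up to inline text files. A sketch of the bundler side, assuming webpack with raw-loader (glslify or a similar tool would work too):

// webpack.config.js (sketch): load .vs/.fs/.glsl files as raw strings
module.exports = {
    module: {
        rules: [
            { test: /\.(vs|fs|glsl)$/, use: 'raw-loader' }
        ]
    }
};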
edit
myVert.vs:
// this is all nicely highlighted in Sublime Text, for example
void main() {
    gl_Position = vec4( position.xy, 0., 1. );
}
myFrag.fs:
void main() {
    gl_FragColor = vec4( 1., 0., 0., 1. );
}
myClass.js:
class myMaterial extends THREE.ShaderMaterial {
    constructor() {
        super({
            vertexShader: require('./myVert.vs'),
            //^ becomes vertexShader: 'void main() {...}'
            ...
You can move the shader code into a separate JS file and include that file after three.js.
Here is one example from https://github.com/timoxley/threejs/blob/master/examples/js/shaders/ColorifyShader.js
/**
 * @author alteredq / http://alteredqualia.com/
 *
 * Colorify shader
 */
THREE.ColorifyShader = {

    uniforms: {
        "tDiffuse": { type: "t", value: null },
        "color":    { type: "c", value: new THREE.Color( 0xffffff ) }
    },

    vertexShader: [
        "varying vec2 vUv;",

        "void main() {",
            "vUv = uv;",
            "gl_Position = projectionMatrix * modelViewMatrix * vec4( position, 1.0 );",
        "}"
    ].join("\n"),

    fragmentShader: [
        "uniform vec3 color;",
        "uniform sampler2D tDiffuse;",
        "varying vec2 vUv;",

        "void main() {",
            "vec4 texel = texture2D( tDiffuse, vUv );",
            "vec3 luma = vec3( 0.299, 0.587, 0.114 );",
            "float v = dot( texel.xyz, luma );",
            "gl_FragColor = vec4( v * color, texel.w );",
        "}"
    ].join("\n")

};
With the above you would create your material like this:
material = new THREE.ShaderMaterial({
    uniforms: THREE.ColorifyShader.uniforms,
    vertexShader: THREE.ColorifyShader.vertexShader,
    fragmentShader: THREE.ColorifyShader.fragmentShader
});
Of course you don't need to call the object THREE.ColorifyShader; you can call it whatever you want.
Basically I'm upgrading my app from r52 to r55. This app uses animations (tweens) to update lines, but also a ParticleSystem. Everything worked just fine in r52: scaling, moving, and changing opacity.
I used these WebGLRenderer constructor settings:
clearColor: 0x1f1f1f
clearAlpha: 1
antialias: true
sortObjects: false
And a simple shader I took from the examples:
<script type="x-shader/x-vertex" id="vertexshader">
attribute float size;
attribute vec3 customColor;
attribute float customOpacity;
varying vec3 vColor;
varying float vOpacity;
void main() {
vColor = customColor;
vOpacity = customOpacity;
vec4 mvPosition = modelViewMatrix * vec4( position, 1.0 );
gl_PointSize = size * ( 300.0 / length( mvPosition.xyz ) );
gl_Position = projectionMatrix * mvPosition;
}
</script>
<script type="x-shader/x-fragment" id="fragmentshader">
uniform vec3 color;
uniform sampler2D texture;
varying vec3 vColor;
varying float vOpacity;
void main() {
gl_FragColor = vec4( color * vColor, vOpacity );
gl_FragColor = gl_FragColor * texture2D( texture, gl_PointCoord );
}
</script>
I initialized the particle ShaderMaterial using:
blending : THREE.AdditiveBlending
depthTest : false
transparent : false
and the ParticleSystem by manually setting:
system.sortParticles = true
system.matrixAutoUpdate = true
system.visible = true
system.dynamic = true
So here is how it renders in Three.js r52:
Now, I've read the Migration wiki page and concluded that I only had to change a few names, and nothing in the WebGLRenderer constructor, materials, or shader attributes.
I've upgraded to r55 and now the visuals are broken:
Lines and particles are now all bright (opacity is not taken into account).
Moreover, for particles the alpha mask is now broken (if you look carefully, the color is different, and there is a "square cut" where particles overlap with other particles, something I had in r52 and fixed by simply tuning the WebGLRenderer settings).
What could have changed? I tried changing settings in the WebGLRenderer constructor, alphas, background colors... but it didn't help.
Likely, you need to set your shader material to transparent:
material.transparent = true;
three.js r.55
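For reference, a minimal sketch of where that flag goes, reusing the constructor options the question listed (the uniforms variable is a placeholder):

var material = new THREE.ShaderMaterial({
    uniforms: uniforms,
    vertexShader: document.getElementById("vertexshader").textContent,
    fragmentShader: document.getElementById("fragmentshader").textContent,
    blending: THREE.AdditiveBlending,
    depthTest: false,
    transparent: true // was false in the r52 setup above
});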