Is it possible to use Float64Array instead of Float32Array in THREE.BufferGeometry - three.js

I am having some trouble with BufferGeometry since it uses Float32Array to specify positions. The values I need to plot (using THREE.Points) are large numbers, for example 2732124.760877, and I lose most of the precision when using Float32Array. When I tried to use Float64Array instead, the plot gets all jumbled up. Is there a way I can use Float64Array instead of Float32Array?
If you want to see what happens when you change from Float32Array to Float64Array, try changing Float32Array to Float64Array on line 43 of the following jsfiddle:
buffer_geometry.addAttribute( 'position', new THREE.BufferAttribute( new Float64Array(lines * 3), 3 ));
buffer_geometry.addAttribute( 'color', new THREE.BufferAttribute( new Float32Array(lines * 3), 3 ));
http://jsfiddle.net/pulasthi/sr3r92hy/1/

No. Look at the WebGLRenderer implementation:
when geometry attributes are parsed, it checks with this condition:
else if ( array instanceof Float64Array ) {
console.warn("Unsupported data buffer format: Float64Array");
}
WebGL does not provide a way to pass double-precision 64-bit number arrays.
If you really need that precision and your GPU supports double-precision numbers, you could pass two 32-bit numbers you assembled bit by bit and convert them back into a double in the shader, but I have never tried something like this.
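The "two 32-bit numbers" idea can at least be sketched on the CPU side. This is a hedged sketch, not a three.js API: splitDouble is an illustrative helper, and the shader-side recombination (or a relative-to-origin offset trick, which is the more common fix) is left out.

```javascript
// Split a 64-bit value into a "high" part that survives the Float32
// round-trip unchanged and a "low" part holding the rounding residual.
// Uploading both parts as separate Float32 attributes preserves far more
// precision than a single Float32 value.
function splitDouble(value) {
  const high = Math.fround(value); // value rounded to 32-bit precision
  const low = value - high;        // residual; small, so it fits a float too
  return { high, low };
}

const { high, low } = splitDouble(2732124.760877);
const recombined = high + low; // recovers the original double exactly here
```

A simpler alternative for large, clustered coordinates is to subtract a common origin from every point on the CPU and add it back via the object's position, keeping the per-vertex values small enough for Float32.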

Related

Limit maximum zoom for a scene in Three.js

I have a Three.js app and I want to limit the zoom for the scene, because at some zoom level the user can get inside my 3D object, which in my opinion is not good UX.
I tried scene.maxZoom = number; but it did not work. What can I do?
Here is the code: https://github.com/AlinAlexandruPeter/code/blob/main/code.js
You don't need Three.js to limit the range of a number, just plain JavaScript. Use Math.min(a, b) to get the lower of the two values.
const MAX_ZOOM = 2.5;
// Clamp user input to 2.5 and below
camera.zoom = Math.min(userInput, MAX_ZOOM);
With this approach, if userInput = 3, it gets clamped to 2.5. There's also Math.max() if you want to clamp the lower end.
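Clamping on both ends can be sketched by combining the two; MIN_ZOOM and MAX_ZOOM are example values, and THREE.MathUtils.clamp does the same thing:

```javascript
const MIN_ZOOM = 0.5;
const MAX_ZOOM = 2.5;

// Keep a zoom value inside [MIN_ZOOM, MAX_ZOOM]:
// Math.min caps the top, Math.max caps the bottom.
function clampZoom(value) {
  return Math.max(MIN_ZOOM, Math.min(value, MAX_ZOOM));
}
```

If you are using OrbitControls, its minDistance/maxDistance properties (or minZoom/maxZoom for an orthographic camera) impose the same limits without manual clamping.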

Three.js: Merge BufferGeometries while keeping attribute position separate for each geometry

In the application I am developing, I am trying to combine the closest dots of the same color. The dots are created from a vector of 3D coordinates and rendered with Points, using a BufferGeometry with custom positions: the positions are set with the geometry's setAttribute method, passing it a Float32BufferAttribute(positionArray, 3). The problem is that I have a lot of such geometries (usually tens of thousands), and since I add each one separately to the group, I have big performance issues.
So I tried to merge the buffer geometries into a single one with BufferGeometryUtils.mergeBufferGeometries, to draw all of them at once, but that didn't work.
(Screenshot: how it looks without merging the geometries)
(Screenshot: how it looks with merged geometries)
This is how I create the geometries:
const newGeometry = baseGeometry.clone();
newGeometry.setAttribute(
  'position',
  new Float32BufferAttribute(geometryPositions, 3)
);
newGeometry.setAttribute(
  'color',
  new Float32BufferAttribute(geometryColors, 3)
);
newGeometry.computeBoundingSphere();
geometryArray.push(newGeometry);
And I add them like this to my group.
geometryArray.forEach((e) => {
  this.group.add(new Mesh(e, baseMaterial));
});
This is how I merge them and add them to the group.
const merged = BufferGeometryUtils.mergeBufferGeometries(geometryArray);
this.group.add(new Mesh(merged, baseMaterial));
As you can see the geometries use the same material in all cases, the color being defined in the colors attribute of each geometry and vertexColors is set to true on the MeshBasicMaterial.
For a single geometry in the array, the position/color data looks like this. The sizes can vary, and the array may or may not be empty depending on whether the points have neighbors. The format is flat 3D coordinates: [x1, y1, z1, x2, y2, z2, ...].
const positions = {
  itemSize: 3,
  type: 'Float32Array',
  array: [
    4118.44775390625, -839.14404296875, 845.7374877929688,
    4125.9306640625, -808.6709594726562, 856.7002563476562,
    4118.44775390625, -839.14404296875, 845.7374877929688,
    4129.93017578125, -870.6640625, 828.08154296875,
  ],
  normalized: false,
};
const colors = {
  itemSize: 3,
  type: 'Float32Array',
  array: [
    0.9725490212440491, 0.5960784554481506, 0.03529411926865578,
    0.9725490212440491, 0.5960784554481506, 0.03529411926865578,
    0.9725490212440491, 0.5960784554481506, 0.03529411926865578,
    0.9725490212440491, 0.5960784554481506, 0.03529411926865578,
  ],
  normalized: false,
};
How could I improve the performance of the code above while keeping the custom positions and colors of each geometry intact?
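As a fallback if mergeBufferGeometries misbehaves, the core of a merge can be sketched by hand: concatenate each geometry's attribute arrays into one typed array and build a single geometry from the result, so everything renders in one draw call. Plain Float32Arrays stand in for BufferAttribute.array here, and mergeAttributeArrays is an illustrative helper, not a three.js API.

```javascript
// Concatenate several typed arrays (e.g. the .array of each geometry's
// 'position' or 'color' attribute) into one Float32Array.
function mergeAttributeArrays(arrays) {
  const total = arrays.reduce((n, a) => n + a.length, 0);
  const merged = new Float32Array(total);
  let offset = 0;
  for (const a of arrays) {
    merged.set(a, offset); // copy each source array at its running offset
    offset += a.length;
  }
  return merged;
}
```

Used against the geometries above, something like mergeAttributeArrays(geometryArray.map(g => g.getAttribute('position').array)) would yield one position array (and likewise for colors) to set on a single merged BufferGeometry. Note also that if these are point clouds, the merged geometry should go into a THREE.Points object rather than a Mesh.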

Reading Pixels in WebGL 2 as Float values

I need to read the pixels of my framebuffer as float values.
My goal is to get a fast transfer of lots of particles between CPU and GPU and process them in realtime. For that I store the particle properties in a floating point texture.
Whenever a new particle is added, I want to get the current particle array back from the texture, add the new particle properties and then fit it back into the texture (this is the only way I could think of to dynamically add particles and process them GPU-wise).
I am using WebGL 2 since it supports reading back pixels to a PIXEL_PACK_BUFFER target. I test my code in Firefox Nightly. The code in question looks like this:
// Initialize the WebGLBuffer
this.m_particlePosBuffer = gl.createBuffer();
gl.bindBuffer(gl.PIXEL_PACK_BUFFER, this.m_particlePosBuffer);
gl.bindBuffer(gl.PIXEL_PACK_BUFFER, null);
...
// In the renderloop, bind the buffer and read back the pixels
gl.bindBuffer(gl.PIXEL_PACK_BUFFER, this.m_particlePosBuffer);
gl.readBuffer(gl.COLOR_ATTACHMENT0); // Framebuffer texture is bound to this attachment
gl.readPixels(0, 0, _texSize, _texSize, gl.RGBA, gl.FLOAT, 0);
I get this error in my console:
TypeError: Argument 7 of WebGLRenderingContext.readPixels could not be converted to any of: ArrayBufferView, SharedArrayBufferView.
But looking at the current WebGL 2 Specification, this function call should be possible. Using the type gl.UNSIGNED_BYTE also returns this error.
When I try to read the pixels into an ArrayBufferView (which I want to avoid since it seems much slower), it works with the format/type combination gl.RGBA / gl.UNSIGNED_BYTE into a Uint8Array, but not with gl.RGBA / gl.FLOAT into a Float32Array. This is as expected, since it's documented in the WebGL specification.
I am thankful for any suggestions on how to get my float pixel values from my framebuffer or on how to otherwise get this particle pipeline going.
Did you try using this extension?
var ext = gl.getExtension('EXT_color_buffer_float');
The gl you have is WebGL 1, not WebGL 2. Try:
var gl = document.getElementById("canvas").getContext('webgl2');
In WebGL 2 the signature for readPixels is
void gl.readPixels(x, y, width, height, format, type, ArrayBufferView pixels, GLuint dstOffset);
so
let data = new Uint8Array(gl.drawingBufferWidth * gl.drawingBufferHeight * 4);
gl.readPixels(0, 0, gl.drawingBufferWidth, gl.drawingBufferHeight, gl.RGBA, gl.UNSIGNED_BYTE, data, 0);
https://developer.mozilla.org/en-US/docs/Web/API/WebGLRenderingContext/readPixels
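If FLOAT readback remains unavailable, one common fallback is to have the fragment shader pack each float's four IEEE-754 bytes into an RGBA8 texel and read those back with gl.UNSIGNED_BYTE. The shader-side packing is assumed here; this sketch shows only the CPU-side decode and assumes the byte order written by the shader matches the host.

```javascript
// Reinterpret bytes read back with gl.RGBA / gl.UNSIGNED_BYTE as floats:
// each RGBA texel carries the 4 raw bytes of one IEEE-754 float32.
function bytesToFloats(bytes) {
  // A Float32Array view over the same buffer; no copying is done.
  return new Float32Array(bytes.buffer, bytes.byteOffset, bytes.length / 4);
}
```

For actual particle updates, though, ping-ponging between two float textures entirely on the GPU usually avoids the readback round-trip altogether.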

adjusting row height in R image() function

I'm drawing several heatmaps using the image() function in R.
The sizes of the heatmaps are quite variable, so every heatmap has a different height; however, I want the row heights to be uniform across heatmaps.
So I create heatmaps from these two matrices, and the cell heights differ between the two heatmaps:
m1<-replicate(40, rnorm(20))
image(1:ncol(m1), 1:nrow(m1), t(m1), axes = FALSE,xlab="",ylab="")
m2<-replicate(40, rnorm(10))
image(1:ncol(m2), 1:nrow(m2), t(m2), axes = FALSE,xlab="",ylab="")
For the life of me, I can't figure out how to specify the row height. It must be a very easy fix, but I can't see it.
You give very limited information. E.g., do you want to create PDFs? Or place several plots on one page?
Here is one solution:
par(fin=c(5,5),mar=c(0,0,0,0))
image(1:ncol(m1), 1:nrow(m1), t(m1), axes = FALSE,xlab="",ylab="")
par(fin=c(5,2.5),mar=c(0,0,0,0))
image(1:ncol(m2), 1:nrow(m2), t(m2), axes = FALSE,xlab="",ylab="")
I am sure there are more elegant solutions depending on what you actually want to do with the graphs.
Just set a common maximum number of rows for all the heatmaps using the ylim parameter:
m1<-replicate(40, rnorm(20))
m2<-replicate(40, rnorm(10))
image(1:ncol(m1), 1:nrow(m1), t(m1), axes=FALSE, ann=FALSE, ylim=c(0, max(sapply(list(m1,m2),nrow)) ))
image(1:ncol(m2), 1:nrow(m2), t(m2), axes=FALSE, ann=FALSE, ylim=c(0, max(sapply(list(m1,m2),nrow)) ))
You may want to manually specify the ylim argument and keep it the same between the two plots:
par(mfrow=c(1,2))
image( 0:ncol(m1), 0:nrow(m1), t(m1), axes=FALSE, xlab='', ylab='',
ylim=c(0,nrow(m1)) )
image( 0:ncol(m2), 0:nrow(m2), t(m2), axes=FALSE, xlab='', ylab='',
ylim=c(0,nrow(m1)) )

Google maps polygon optimization

I extracted country outline data from somewhere and successfully converted it into an array of lat/lng coordinates that I can feed to the Google Maps API to draw a polyline or polygon.
The problem is that there are about 1200+ points in that shape. It renders perfectly in Google Maps, but I need to reduce the number of points from 1200 to fewer than 100. I don't need a very smooth outline; I just want to throw away the points I can live without. Any algorithm or online tool that can help me reduce the number of points is welcome.
I found this simple JavaScript by Bill Chadwick: a Douglas-Peucker line simplification routine. Feed your LatLng objects into an array and pass it as the source argument;
it will output an array with fewer points for the polygon.
var ArrayforPolygontoUse = GDouglasPeucker(theArrayofLatLng, 2000);
var polygon = new google.maps.Polygon({
  path: ArrayforPolygontoUse,
  geodesic: true,
  strokeColor: "#0000FF",
  strokeOpacity: 0.8,
  strokeWeight: 2,
  fillColor: "#0000FF",
  fillOpacity: 0.4,
  editable: true
});
theArrayofLatLng is an array of LatLng objects that you collected using the Google Maps API.
The 2000 value is the kink, in metres; my assumption is that the higher the value, the more points will be deleted from the output.
For real beginners:
Make sure you declare the js file on your html page before using it. :)
<script type="text/javascript" src="js/GDouglasPeucker.js"></script>
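For reference, the Douglas-Peucker routine itself is short enough to sketch in plain JavaScript. Points are assumed as {x, y} objects, and the tolerance is in the same units as the coordinates (so decimal degrees for raw lat/lng, unlike the metre-based kink above); both function names are illustrative.

```javascript
// Perpendicular distance from point p to the line through a and b.
function perpendicularDistance(p, a, b) {
  const dx = b.x - a.x, dy = b.y - a.y;
  const len = Math.hypot(dx, dy);
  if (len === 0) return Math.hypot(p.x - a.x, p.y - a.y);
  return Math.abs(dy * p.x - dx * p.y + b.x * a.y - b.y * a.x) / len;
}

// Recursively keep only points that deviate from the first-last chord
// by more than the tolerance.
function douglasPeucker(points, tolerance) {
  if (points.length <= 2) return points.slice();
  let maxDist = 0, index = 0;
  const first = points[0], last = points[points.length - 1];
  for (let i = 1; i < points.length - 1; i++) {
    const d = perpendicularDistance(points[i], first, last);
    if (d > maxDist) { maxDist = d; index = i; }
  }
  if (maxDist <= tolerance) return [first, last]; // everything in between is flat enough
  const left = douglasPeucker(points.slice(0, index + 1), tolerance);
  const right = douglasPeucker(points.slice(index), tolerance);
  return left.slice(0, -1).concat(right); // drop the duplicated split point
}
```

The output retains the endpoints and every point whose removal would shift the outline by more than the tolerance, which is exactly the "throw away the points I can live without" behavior asked for.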
I think MapShaper can do this online.
Otherwise, implement one of the simplification algorithms yourself.
If you can install PostGIS (which I think is easy, as they provide an installer), you can import the data and run ST_SnapToGrid() or ST_Simplify(), for which I cannot find an equivalent in MySQL. If you decide to go with PostGIS, which I recommend because it will help you down the road, I can provide the details.
For an easy custom solution, you can reduce the size by cutting or rounding some of the last digits of the coordinates and then merging the coordinates that become identical, which effectively gives you a simple snap-to-grid.
Hope it helps
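The rounding-and-merging idea can be sketched as a poor man's snap-to-grid; snapAndDedupe is an illustrative helper that rounds each coordinate to a fixed number of decimals and collapses consecutive duplicates:

```javascript
// Round coordinates to `decimals` places and drop consecutive points
// that snap to the same grid cell.
function snapAndDedupe(points, decimals) {
  const f = Math.pow(10, decimals);
  const out = [];
  let prevKey = null;
  for (const p of points) {
    const x = Math.round(p.x * f) / f;
    const y = Math.round(p.y * f) / f;
    const key = x + "," + y;
    if (key !== prevKey) out.push({ x, y });
    prevKey = key;
  }
  return out;
}
```

This is cruder than Douglas-Peucker (it cannot tell a sharp corner from a flat run), but it is a one-pass operation and easy to tune by varying the decimal count.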
I was looking for exactly the same thing and found Simplify.js. It does exactly what you want and is incredibly easy to use: you simply pass in your coordinates, and it removes all excess points.
simplify(points, tolerance, highQuality)
The points argument should contain an array of your coordinates formatted as {x: 123, y: 123}. (Afterwards you can convert it back to the format you wish.)
The tolerance should be the precision in decimal degrees, e.g. 0.0001 for roughly 11 meters. Increasing this number reduces the output size.
Set highQuality to true for better results if you don't mind waiting a few milliseconds longer.
Most likely you want to divide the points in half; you could try my JavaScript function:
function shortenAndShow(polyline, color) {
  var dist = 0, copyPoints = [];
  for (var n = 0, end = polyline.getVertexCount() - 1; n < end; n++) {
    dist += polyline.getVertex(n).distanceFrom(polyline.getVertex(n + 1));
    copyPoints.push(polyline.getVertex(n));
  }
  var lastPoint = copyPoints[copyPoints.length - 1];
  var newLine = new GPolyline(copyPoints, color, 2, 1);
  gmap2.addOverlay(newLine);
}
I agree with Unreason's answer. The website supports GeoJSON; I used it on my own site, and it cut down my GeoJSON. But I think you may also need this world-country GeoJSON.