I have compiled OpenJDK on Ubuntu and installed the JRE on another Linux system.
I am creating a chart image with JFreeChart using OpenJDK and saving it as a JPG. The output image contains only the white background; the chart lines are missing. The same code works fine on Windows with OpenJDK.
I guess some dependencies are missing, but I am not able to find them. Could anyone list the dependencies of OpenJDK? It requires native libraries on Linux.
I am also getting a font-related exception (sun.awt.X11FontManager.getDefaultPlatformFont(X11FontManager.java:779)). What is the default font location for OpenJDK? It does not seem to look into the fontconfig.properties file.
While I don't know how to help with your exact problem, you could consider trying a different Java charting API. I prefer XChart, and if you just need a line or scatter chart and PNG format is OK, it would be very easy to try it out on Ubuntu quickly. Taken from here (Example 1), this is all the code you would need:
// Sample Data
Collection<Number> xData = Arrays.asList(new Number[] { 0.0, 1.0, 2.0 });
Collection<Number> yData = Arrays.asList(new Number[] { 0.0, 1.0, 2.0 });
// Create Chart
Chart chart = new Chart(500, 400);
chart.setChartTitle("Sample Chart");
chart.setXAxisTitle("X");
chart.setYAxisTitle("Y");
chart.addSeries("y(x)", xData, yData);
BitmapEncoder.savePNG(chart, "./Sample_Chart.png");
I'm testing the example https://github.com/petrbroz/forge-potree-demo, trying to import an NWD and a point cloud into the Forge Viewer, but I can't make their coordinates match.
I confirmed in Navisworks that both the point cloud and the model are aligned at 0,0,0, but they are displaced more than 200' in the Viewer.
What we tried
The point cloud was converted using PotreeConverter 1.6 and 1.7 (https://github.com/potree/PotreeConverter/releases/tag/1.7).
The point cloud is scaled from meters to feet using let scale = new THREE.Vector3(3.28084, 3.28084, 3.28084);.
The models are inserted with a global offset of 0,0,0: globalOffset: {x: 0, y: 0, z: 0}.
We created a sphere at a known point directly in Three.js, using coordinates in feet, and it matches the location in the NWD.
We found that PotreeConverter is not 100% compatible with the sample; a quick fix is needed on line 464 to read the categories created by the converter:
var pointAttributeName = pointAttributes[i].name.toUpperCase().replaceAll(' ','_');
We also needed to add a new category around line 434:
PointAttribute.RGBA = new PointAttribute(PointAttributeNames.COLOR_PACKED, PointAttributeTypes.DATA_TYPE_INT8, 4);
Without those two fixes we can't load the point cloud.
What are we missing?
We can override the location by hand, but the final coordinates don't make sense with the project; they are too arbitrary. Could the two fixes be the problem? What would be an alternative?
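One caveat with the line-464 fix quoted above: String.prototype.replaceAll only exists in newer JavaScript engines (ES2021 and later), so depending on the browser or runtime, a global regex replace is the safer equivalent. A minimal sketch (the function name here is illustrative, not part of the demo):

```javascript
// Equivalent attribute-name normalization without relying on
// String.prototype.replaceAll (ES2021+): replace every space globally.
const normalizeAttributeName = (name) => name.toUpperCase().replace(/ /g, '_');

console.log(normalizeAttributeName('color packed')); // "COLOR_PACKED"
```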
We found the problem: Potree was using a bounding box in absolute units taken from the actual point-cloud bounding box.
This makes sense if you want the point cloud at the center of the scene, but not in our case.
So, if you override this behavior by changing the Potree.js bounding-box setup from
boundingBox.min.sub(offset);
boundingBox.max.sub(offset);
tightBoundingBox.min.sub(offset);
tightBoundingBox.max.sub(offset);
to
tightBoundingBox.min.sub(offset);
tightBoundingBox.max.sub(offset);
boundingBox.copy(tightBoundingBox);
the point cloud will align with the model.
FYI: the fixes we made were actually what allowed us to run PotreeConverter 1.7 instead of 1.6 as specified in the demo readme.
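The effect of that change can be sketched without Three.js. Assuming box objects with min/max points and an offset equal to the tight box's minimum (the names and numbers below are illustrative, not Potree's actual internals): shifting only the tight box and then copying it into the loose box guarantees the two agree, so the cloud lands where the model expects it.

```javascript
// Minimal stand-ins for THREE.Vector3/THREE.Box3 to illustrate the fix.
const sub = (a, b) => ({ x: a.x - b.x, y: a.y - b.y, z: a.z - b.z });

// Tight box of the cloud in absolute (project) coordinates.
let tightBoundingBox = {
  min: { x: 10, y: 20, z: 0 },
  max: { x: 14, y: 24, z: 2 },
};
const offset = { ...tightBoundingBox.min };

// Patched logic: shift only the tight box by the offset...
tightBoundingBox = {
  min: sub(tightBoundingBox.min, offset),
  max: sub(tightBoundingBox.max, offset),
};
// ...then make the loose box coincide with it
// (the boundingBox.copy(tightBoundingBox) line).
const boundingBox = {
  min: { ...tightBoundingBox.min },
  max: { ...tightBoundingBox.max },
};

console.log(boundingBox.min); // both boxes now start at the origin
```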
I'm forwarding this open issue from the A-Frame GitHub page, hoping that some people may know what's going on. It still isn't working with A-Frame 1.0.3, and I don't really know where to look. There are two examples in the GitHub issue, one working with A-Frame 0.8.0 and the other with A-Frame 0.9.0. There are no warnings or messages at all in the JavaScript console, so it's hard to find where the issue could be.
Version 0.8.0 used Three.js r90 and version 0.9.0 used r101, so maybe something changed between these two versions of Three.js, but I don't understand what.
Did anybody find a way to use videos with transparency in recent versions of A-Frame and/or Three.js?
Thanks for your help :)
It seems that the assigned texture format doesn't have an alpha channel (THREE.RGBFormat; you can log and check the value here).
You can resolve the issue by changing the format of the video texture to THREE.RGBAFormat:
videoTexture.format = THREE.RGBAFormat
within a custom component containing a fix like this:
// wait until the material is ready
this.el.addEventListener('materialtextureloaded', e => {
// grab the material
let material = this.el.getObject3D("mesh").material;
// swap the format
material.map.format = THREE.RGBAFormat;
material.map.needsUpdate = true;
})
Working glitch here.
I'm trying to implement normal maps in a Three.js fragment shader, but it seems that a key feature, computeTangents, has been removed in recent versions.
Here is a working demo using an older version of Three:
http://coryg89.github.io/technical/2013/06/01/photorealistic-3d-moon-demo-in-webgl-and-javascript/
Three.js used computeTangents() to add an attribute called "tangents" to each vertex, which is sent to the shader.
So I researched as much as I could and tried a shader-only method of computing the tangent, but this requires dFdx, which causes an error about GL_OES_standard_derivatives in the shader on my MacBook Pro.
Then I tried converting the geometry from a simple cube to buffer geometry for use with the BufferGeometryUtils.computeTangents() function, in order to generate the tangents there, but that requires indexed geometry, which isn't present in basic geometry created by Three.js.
From the original demo, this is the line I need to recreate using the latest ThreeJS:
var mesh = new THREE.Mesh(geo, mat);
mesh.geometry.computeTangents();
Repo here:
https://github.com/CoryG89/MoonDemo
Is it possible to get this demo working using the new version of Three?
I found the answer to this. For the demo above, it required changing THREE.SphereGeometry to THREE.SphereBufferGeometry:
var geo = new THREE.SphereBufferGeometry(radius, xSegments, ySegments);
Then I had to add the BufferGeometryUtils.js file and use the following code:
THREE.BufferGeometryUtils.computeTangents( geo );
This got the demo working again.
I've been experimenting with OpenGL on OS X, with an OpenGL ES implementation as reference code. The goal is to render an image buffer (CVImageBuffer) that is in YUV format.
I need to know how to specify the color format of the texture created by the CVOpenGLTextureCacheCreateTextureFromImage() API (or is it fixed to a particular type?). I'm trying to figure this out so that I can appropriately access/process the colors in the fragment shader (RGB vs. YUV).
Also, I can see that there is an "internalFormat" option that can be used to control this in the OpenGL ES version of the API.
Thanks!
I wrote a little tool with node-webkit. One reason I chose node-webkit is that it makes it easy to distribute your app to all major platforms.
Something I would love to do now is resize a bunch of images located on the file system.
I found plenty of packages that do this via ImageMagick. That would require the user to have ImageMagick installed, which is bad...
Using a web service is not an option; there can easily be around 600 images.
If there is no solution, I will only run that task if ImageMagick is installed.
You could use the canvas tag to resize your images.
Load the image into a canvas at the new size:
...
var tempCanvas = document.createElement('canvas');
tempCanvas.width = newWidth;
tempCanvas.height = newHeight;
var ctx = tempCanvas.getContext('2d');
var img = new Image();
img.src = imageSrc;
img.onload = function () {
// draw scaled to the canvas dimensions so the image is actually resized;
// the two-coordinate form would draw it at its natural size and clip it
ctx.drawImage(this, 0, 0, newWidth, newHeight);
};
Get the resized image back from the canvas (do this inside or after the onload callback, since the image loads asynchronously):
...
var image = canvas.toDataURL('image/png');
image = image.replace('data:image/png;base64,', '');
var buffer = new Buffer(image, 'base64');
fs.writeFile('filename.png', buffer, function (error) {
if (error) {
// TODO handle error
}
});
...
In this example the resulting image will be a PNG. You can use any result type node-webkit supports. If you have different image types as input and want to output in the same type, you need to add some code that passes the correct MIME type to canvas.toDataURL.
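A small helper along those lines (hypothetical, not part of node-webkit) could map the input file's extension to the MIME type passed to toDataURL:

```javascript
// Hypothetical helper: choose the canvas.toDataURL MIME type from the
// input file's extension, falling back to PNG for unsupported types.
function mimeTypeFor(filename) {
  const ext = filename.split('.').pop().toLowerCase();
  const types = {
    jpg: 'image/jpeg',
    jpeg: 'image/jpeg',
    png: 'image/png',
    webp: 'image/webp',
  };
  return types[ext] || 'image/png';
}
```

Then canvas.toDataURL(mimeTypeFor(inputPath)) keeps JPEG input as JPEG output; remember to strip the matching data:<type>;base64, prefix before writing the buffer.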
I am developing an image processing module without any runtime dependencies, which means your users won't need to have ImageMagick installed. It's still at an early stage, but already usable.
Part of the module is written in C++, so you'll have to make sure to npm install the module on each platform for which you package your app (better than telling your users to pre-install ImageMagick, imho). node-webkit apps are distributed per platform anyway, so that shouldn't be a problem. Note, though, that I haven't tested it with node-webkit yet.
With this module, resizing an image is as simple as:
image.batch().resize(200, 200).writeFile('output.jpg',function(err){
// done
});
More info at the module's Github repo.