How can I find all the attributes of "Canvas" in react-three-fiber? - three.js

I read the official documentation for Canvas in react-three-fiber, but it lists only a few attributes. Then I saw someone's project that uses many more attributes in his code, like:
<Canvas
  concurrent
  noEvents={false}
  pixelRatio={window.devicePixelRatio}
  camera={{ position: [0, 0, 2.5], fov: 69 }}
  gl={{ antialias: true }}
  onCreated={({ gl, scene }) => {
    gl.toneMapping = THREE.ACESFilmicToneMapping
    gl.outputEncoding = THREE.sRGBEncoding
    // scene.background = new THREE.Color('#373740')
  }}>
  ...
</Canvas>
So how can I find a complete description of Canvas and everything it accepts in react-three-fiber?

When you don't have access to documentation or source files, just try outputting the JavaScript object to the console with console.log(). For example:
var c = <Canvas>...</Canvas>
console.log(c);
Then open your developer console, and you'll see an object with all its available properties, public methods, and anything else you may have access to.
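For illustration, here is a rough sketch of what you might poke at in the console; note that the logged value is the React element you just created, so its props field contains whatever you passed in (the exact shape of the element object depends on your React version):

const c = (
  <Canvas camera={{ position: [0, 0, 2.5] }}>
    {/* ... */}
  </Canvas>
);

console.log(c);       // the React element object
console.log(c.type);  // the Canvas component itself
console.log(c.props); // the props you passed, e.g. { camera: {...}, children: ... }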

Have a look at the documentation on GitHub. Here's a link specifically for the Canvas component.
https://github.com/pmndrs/react-three-fiber/blob/master/markdown/api.md#canvas
This page describes the API of react-three-fiber.
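For orientation, the props used in the question's snippet (camera, gl, pixelRatio, onCreated, and so on) are all covered on that page. Here is a small, hedged example of the most common ones, keeping in mind that names have changed between react-three-fiber versions (for instance, pixelRatio later became dpr):

<Canvas
  camera={{ position: [0, 0, 2.5], fov: 69 }} // settings applied to the default camera
  gl={{ antialias: true }}                    // constructor arguments for the WebGLRenderer
  pixelRatio={window.devicePixelRatio}        // device pixel ratio
  onCreated={({ gl, scene, camera }) => {
    // called once the renderer, scene, and camera exist
  }}>
  {/* your scene goes here */}
</Canvas>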

Get a reference to the Canvas and log it once the component has mounted (useRef and useEffect come from React):
const myCanvas = useRef();

useEffect(() => {
  console.log(myCanvas.current); // only populated after the first render
}, []);

<Canvas ref={myCanvas}>
  ...
</Canvas>
Get the reference, then log it to the console.

Related

How can I add 2d text to three.js (already tried sprites, fontLoader, etc)?

I'm trying to add 2D text as labels next to an object, as shown in the image. I've tried sprites (which, as far as I understand, don't work in newer versions of three.js), fontLoader, and a couple of rendering mechanisms, but I have not had any success, unfortunately.
I see that I can use CSS3D, but I'm not sure what to grab from the sample code. If someone could point me in the right direction, I would appreciate it.
[Image showing what I'm trying to achieve]
If anyone has any advice, I would greatly appreciate it.
The following are some key parts of my code:
<script src="https://unpkg.com/three#0.132.2/build/three.min.js"></script>
<script src="https://unpkg.com/three#0.132.2/examples/js/loaders/GLTFLoader.js"></script>
<script src="https://unpkg.com/three#0.132.2/examples/js/loaders/DRACOLoader.js"></script>
<script src="https://unpkg.com/three#0.132.2/examples/js/controls/OrbitControls.js"></script>
<canvas id="c"></canvas>
window.addEventListener('load', init);

function init() {
  const width = window.innerWidth;
  const height = window.innerHeight;

  const canvasElement = document.querySelector('#c');
  const renderer = new THREE.WebGLRenderer({
    canvas: canvasElement,
  });
  renderer.setSize(width, height);

  const scene = new THREE.Scene();
  scene.background = new THREE.Color(0xf5e8c6);

  // GLTFLoader comes from the examples script included above
  const loader = new THREE.GLTFLoader();
  loader.load('js/PricklyPearObject/scene.gltf', (gltf) => {
    const model = gltf.scene;
    model.scale.set(0.1, 0.1, 0.1);
    model.position.set(-2, 0, 0);
    scene.add(model);
  });
}
I tried using sprites, FontLoader, and an approach using render, but could not get any of them to work.
Here is a quick and easy way to achieve this, even if you have little coding experience: use "Model Viewer". Go to https://modelviewer.dev/
Open their Editor.
Then drag your glTF or GLB model into the scene.
Click the "Edit" button, then the "Add Hotspot" button, and add your text as a label.
You can then set up the camera (initial position, etc.), improve the lights and shadows, or add styles and customize the material.
When you are happy with your model, copy the "snippet code" into your HTML (don't forget to include the scripts), or "download the scene" as a three.js project, check the code, and run it with the "Go Live" button in VS Code.
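For reference, a minimal sketch of roughly what the copied snippet looks like; the model file, hotspot position, and label text here are placeholders:

<script type="module" src="https://unpkg.com/@google/model-viewer/dist/model-viewer.min.js"></script>

<model-viewer src="prickly-pear.glb" alt="Prickly pear model" camera-controls>
  <!-- a hotspot is a slotted element anchored to a point on the model -->
  <button slot="hotspot-1" data-position="0 0.5 0" data-normal="0 1 0">
    Prickly pear
  </button>
</model-viewer>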
Hope it helps.

Attribution text not getting captured when using the image of the map canvas Mapbox-GL-JS

I am using ESRI basemaps with Mapbox-GL-JS. I am trying to capture a screenshot of the map using the following code:
this.map.getCanvas().toBlob(function (blob) {
  canvasContext.strokeStyle = '#CCCCCC';
  canvasContext.strokeRect(leftPosition, topPosition, width, height);

  var img = new Image();
  img.setAttribute("crossOrigin", "anonymous");
  var srcURL = URL.createObjectURL(blob);

  img.onload = function () {
    canvasContext.drawImage(img, leftPosition, topPosition, width, height);
    URL.revokeObjectURL(srcURL);
  };
  img.src = srcURL;
});
I am not able to figure out why the attribution on the map is not getting captured in the screenshot. I understand that I am only grabbing the canvas of the map here. I even tried adding text elements to the map canvas, and that doesn't work either. My markers and routes are captured in the image correctly. I also tried the same with the Mapbox basemap and ran into the same issue.
Any help is highly appreciated!
map.getCanvas() will only return the map's canvas, not any of the HTML elements which sit over the map, like the controls, the Mapbox logo, or the attribution text. Sam Murphy has been working on an example showing how to capture the map, including the logo and attribution text, to an image, which you can see at https://github.com/mapbox/mapbox-gl-js/pull/6518/files.
Since we can't easily capture an HTML element to an image in JavaScript, the attribution text is re-created in a canvas and drawn into the image.
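If you only need the attribution text (not the logo), a minimal sketch of that idea is to draw the text yourself after the map image, reusing the variables from the question's snippet; the attribution string below is a placeholder, so read the real one from the attribution control if it has to match exactly:

this.map.getCanvas().toBlob(function (blob) {
  var img = new Image();
  img.setAttribute("crossOrigin", "anonymous");
  var srcURL = URL.createObjectURL(blob);

  img.onload = function () {
    // draw the captured map first
    canvasContext.drawImage(img, leftPosition, topPosition, width, height);
    URL.revokeObjectURL(srcURL);

    // then re-create the attribution, since it lives in an HTML element
    // that toBlob() cannot see
    var attribution = '© Mapbox © OpenStreetMap'; // placeholder text
    canvasContext.font = '10px sans-serif';
    canvasContext.fillStyle = 'rgba(0, 0, 0, 0.75)';
    canvasContext.textAlign = 'right';
    canvasContext.textBaseline = 'bottom';
    canvasContext.fillText(attribution, leftPosition + width - 4, topPosition + height - 4);
  };
  img.src = srcURL;
});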

three.js with customization

Is it possible to add a model (say, a mobile phone) to an HTML5 canvas using three.js, and then make it customizable, e.g. adding text, images, etc. onto the phone, using any canvas library, so as to make an interactive 3D model?
Thanks.
You could use a library called Dat.GUI, which allows you to create a quick user interface that can accept plain text input fields, drop downs, select boxes, as well as numerical sliders. Here's an example of it being used with a text input field, which you could use to input a texture/image URL.
It's a really powerful library that can be further styled with CSS, if needed. This is all the code you need to get up and running (as you can see below, the object's properties get modified directly by Dat.GUI via invoking gui.add(object, 'property')):
<script type="text/javascript" src="dat.gui.js"></script>
<script type="text/javascript">
var FizzyText = function() {
this.message = 'dat.gui';
this.speed = 0.8;
this.displayOutline = false;
this.explode = function() { ... };
// Define render logic ...
};
window.onload = function() {
var text = new FizzyText();
var gui = new dat.GUI();
gui.add(text, 'message');
gui.add(text, 'speed', -5, 5);
gui.add(text, 'displayOutline');
gui.add(text, 'explode');
};
</script>
Yes. Three.js renders into a canvas, and you can use any HTML tools to build the rest of the page however you want. One way of tying the two together is sketched below.
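To make that concrete, here is a hedged sketch of one way to combine them: draw the user's text on an ordinary 2D canvas, then use that canvas as a texture on a three.js material (the plane geometry, sizes, and text below are placeholders standing in for the phone model's screen):

// draw the user's text onto an offscreen 2D canvas
const labelCanvas = document.createElement('canvas');
labelCanvas.width = 512;
labelCanvas.height = 256;
const ctx = labelCanvas.getContext('2d');
ctx.fillStyle = '#ffffff';
ctx.fillRect(0, 0, labelCanvas.width, labelCanvas.height);
ctx.fillStyle = '#000000';
ctx.font = '48px sans-serif';
ctx.fillText('Hello phone', 20, 128);

// use that canvas as a texture on a mesh in the three.js scene
const texture = new THREE.CanvasTexture(labelCanvas);
const material = new THREE.MeshBasicMaterial({ map: texture });
const screen = new THREE.Mesh(new THREE.PlaneGeometry(1, 0.5), material);
scene.add(screen);

// whenever the 2D canvas is redrawn (new text or image), flag the texture
texture.needsUpdate = true;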

how to use html content inside a canvas element

Can anyone tell me how to place HTML content on a canvas? And if we can do that, will the properties and events of those elements still work? I also have animations drawn on that canvas.
From this article on MDN:
You can't just draw HTML into a canvas. Instead, you need to use an SVG image containing the content you want to render. To draw HTML content, you'd use a <foreignObject> element containing the HTML, then draw that SVG image into your canvas.
It then suggests you follow these steps:
The only really tricky thing here (and that's probably an overstatement) is creating the SVG for your image. All you need to do is create a string containing the XML for the SVG and construct a Blob with the following parts:
The MIME media type of the blob should be "image/svg+xml".
The <svg> element.
Inside that, the <foreignObject> element.
The (well-formed) HTML itself, nested inside the <foreignObject>.
By using a Blob object URL as described above, we can inline our HTML instead of having to load it from an external source. You can, of course, use an external source if you prefer, as long as the origin is the same as the originating document.
The following example is provided (you can see more information about this in this blog by Robert O'Callahan):
const ctx = document.getElementById("canvas").getContext("2d");

const data = `
  <svg xmlns='http://www.w3.org/2000/svg' width='200' height='200'>
    <foreignObject width='100%' height='100%'>
      <div xmlns='http://www.w3.org/1999/xhtml' style='font-size:40px'>
        <em>I</em> like <span style='color:white; text-shadow:0 0 2px blue;'>CANVAS</span>
      </div>
    </foreignObject>
  </svg>
`;

const img = new Image();
const svg = new Blob([data], {type: "image/svg+xml;charset=utf-8"});
const url = URL.createObjectURL(svg);

img.onload = function() {
  ctx.drawImage(img, 0, 0);
  URL.revokeObjectURL(url);
};

img.src = url;

<canvas id="canvas" style="border:2px solid black;" width="200" height="200"></canvas>
This example renders the HTML into the canvas as an image.
Will the properties and events of those elements work or not?
No. Everything drawn to a canvas is flattened into passive pixels; it simply becomes an image.
You will need to provide your own custom logic to handle things such as clicks, objects, and events. That logic needs to keep track of the areas, the objects, and anything else itself, for example as sketched below.
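A minimal sketch of that kind of custom logic, assuming you keep your own list of the rectangles you drew (the region list and its fields are made-up names for illustration):

// regions we drew onto the canvas, tracked separately from the pixels
const regions = [
  { x: 10, y: 10, width: 180, height: 60, name: 'headline' },
];

const canvas = document.getElementById('canvas');

canvas.addEventListener('click', (event) => {
  // convert the click to canvas coordinates
  const rect = canvas.getBoundingClientRect();
  const x = event.clientX - rect.left;
  const y = event.clientY - rect.top;

  // check which tracked region, if any, was hit
  const hit = regions.find(r =>
    x >= r.x && x <= r.x + r.width &&
    y >= r.y && y <= r.y + r.height
  );
  if (hit) {
    console.log('clicked on', hit.name);
  }
});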

Kineticjs - Help uploading images to stage from input file

I am trying to allow users to upload their own images to the KineticJS stage through an input in the HTML. I prefer to keep all my code in a separate JS file; here is what I have so far:
$(document).ready(function() {
  var stage = new Kinetic.Stage({
    container: 'container',
    width: 900,
    height: 500
  });
  var layer = new Kinetic.Layer();
});

function addImage() {
  var imageObj = new Image();
  imageObj.onload = function() {
    var myImage = new Kinetic.Image({
      x: 140,
      y: stage.getHeight() / 2 - 59,
      image: imageObj,
      width: 106,
      height: 118
    });
    layer.add(myImage);
    stage.add(layer);
  };

  var f = document.getElementById('uploadimage').files[0];
  var name = f.name;
  var url = window.URL;
  var src = url.createObjectURL(f);
  imageObj.src = src;
}
How do I expose the stage to the addImage() function? It is out of scope at the moment, and I haven't been able to figure out how to solve the problem, as the canvas doesn't show up in the HTML until something is added to it. I need these images to be added as layers for future manipulation, which is why I want to use KineticJS. Any suggestions would be much appreciated!
http://jsfiddle.net/8XKBM/12/
I managed to get your addImage function working by attaching an event to it. If you use the Firebug console in Firefox, or just press Ctrl+Shift+J, you can see JavaScript errors. It turns out your function was being read as undefined; now the alert is working, but your image isn't added because the files aren't stored anywhere yet, like on a server (they must be uploaded somewhere first).
I used jQuery to attach the event, since you should use that instead of onclick='function()':
$('#addImg').on('click', function() {
  addImage();
});
and changed the markup to:
<div>
  <input type="file" name="img" size="5" id="uploadimage" />
  <button id="addImg" value="Upload">Upload</button>
</div>
What you would really want to do is have the user upload the photos to the server on the fly using AJAX (available with jQuery; it doesn't interfere with KineticJS). Then, on success, you can draw the photo onto the canvas using your function. Make sure to use:
layer.draw()
or
stage.draw()
at the end of the addImage() function so that the photo is actually drawn on your canvas; the image isn't drawn until it has loaded, and img.src is only set at the end of the function. So this basically just requires getting things in the correct order rather than being difficult.
So, step 1: upload using AJAX (to the server); step 2: add to the stage; step 3: redraw the stage. A sketch of the stage-scoping and redraw part is below.
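A minimal sketch of the stage-scoping and redraw points, reusing the names and sizes from the question (the click handler assumes a file has already been chosen):

// declare stage and layer in the outer scope so addImage() can reach them
var stage, layer;

$(document).ready(function() {
  stage = new Kinetic.Stage({ container: 'container', width: 900, height: 500 });
  layer = new Kinetic.Layer();
  stage.add(layer);

  $('#addImg').on('click', addImage);
});

function addImage() {
  var file = document.getElementById('uploadimage').files[0];
  var imageObj = new Image();

  imageObj.onload = function() {
    layer.add(new Kinetic.Image({
      x: 140,
      y: stage.getHeight() / 2 - 59,
      image: imageObj,
      width: 106,
      height: 118
    }));
    layer.draw(); // redraw so the new image actually shows up
  };

  imageObj.src = window.URL.createObjectURL(file);
}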
