Using HTML Canvas from ReasonML using React Hooks

I'm looking for a quick example on how to get started using the following technologies together:
HTML5 Canvas
ReasonML
ReasonReact: "ReasonReact is a safer, simpler way to build React components, in Reason."
bs-webapi: Web API bindings for Reason
React Hooks
To get me started, a snippet that does the following would be great:
Manages a reference to the HTML5 Canvas element elegantly and correctly
Is a simple react component
Clears the canvas and draws something
I already have the basic ReasonML React project setup.

Here is a sample that shows one way to put everything together:
// Helper type to pass canvas size
type dimensions = {
  width: float,
  height: float,
};

// Actual drawing happens here; canvas context and size as parameters.
let drawOnCanvas =
    (context: Webapi.Canvas.Canvas2d.t, dimensions: dimensions): unit => {
  open Webapi.Canvas.Canvas2d;
  clearRect(context, ~x=0., ~y=0., ~w=dimensions.width, ~h=dimensions.height);
  setFillStyle(context, String, "rgba(0,128,169,0.1)");
  fillRect(context, ~x=10.0, ~y=10.0, ~w=30.0, ~h=30.0);
};
// Extract canvas dimensions from canvas element
let canvasDimensions = (canvasElement: Dom.element): dimensions =>
  Webapi.Canvas.CanvasElement.{
    width: float_of_int(width(canvasElement)),
    height: float_of_int(height(canvasElement)),
  };

// An adapter to give nicer parameters to drawOnCanvas above
let drawOnCanvasElement = (canvasElement: Dom.element): unit =>
  Webapi.Canvas.CanvasElement.(
    drawOnCanvas(
      getContext2d(canvasElement),
      canvasDimensions(canvasElement),
    )
  );
[@react.component]
let make = () => {
  open React;
  let canvasElementRef: Ref.t(option(Dom.element)) = useRef(None);
  useLayoutEffect0(() => {
    Ref.current(canvasElementRef)
    |> Belt.Option.map(_, drawOnCanvasElement)
    |> ignore;
    None;
  });
  <canvas
    width="200"
    height="100"
    ref={ReactDOMRe.Ref.callbackDomRef(elem =>
      React.Ref.setCurrent(canvasElementRef, Js.Nullable.toOption(elem))
    )}
  />;
};
Here are some links I found useful while learning how to do this (adding them here in case they are useful for others too):
The bs-webapi test file to quickly learn the basics
A specific answer on how to use the setFillStyle (and where I learned the link to the test file above)
An answer in reason-react project showing how to work with React Refs.
The code has more type annotations than strictly necessary, and more open
statements could be used, but I like my answers a bit on the verbose
side for a bit more instructiveness.
It should be relatively easy to shorten the code.
The intermediate functions canvasDimensions and drawOnCanvasElement add
a bit of structure to the code in my opinion, but I'm not sure whether they make the sample clearer for readers, or whether there is a more elegant way to work with the canvas size.

Related

Plotly.js, show tooltips outside of chart container

I need to implement a plotly.js chart on a page with a very restricted width. As a result, a tooltip is partially cut. Is it possible to cause tooltip not to be limited by plotly.js container size?
My code example at codepen: https://codepen.io/anatoly314/pen/gOavXzZ?editors=1111
//my single trace defined as following but it's better to see example at codepen
const yValue1 = [1000];
const trace1 = {
x: [1],
y: yValue1,
name: `Model 1`,
text: yValue1.map(value => Math.abs(value)),
type: 'bar',
textposition: 'outside'
};
It is, by design, not possible for any part of the chart to overflow its container.
I would say it is wrong to claim this is impossible by design! It is a bit hacky, but when you add the following lines, the label is shown outside of the svg:
svg.main-svg, svg.main-svg * {
  overflow: visible !important;
}
The answer given by rokdd works. However, the CSS selector should be more specific; otherwise you may introduce subtle bugs (particularly if you need to scroll the content that contains the plotly chart).
If we look at the DOM tree constructed by Plotly, we find that the tooltips are created inside the <g class="hoverlayer"></g> element (a direct child of one of the three <svg class="main-svg"></svg> elements). So only that parent (that svg.main-svg element) needs to be affected.
The ideal CSS selector in this case would be the :has selector. However, it is still not supported (as of 2022): https://css-tricks.com/the-css-has-selector/
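For reference, once :has is available, the rule from rokdd's answer could be scoped like this (untested sketch):

```css
/* Only un-clip the main svg that actually contains a hover layer. */
svg.main-svg:has(g.hoverlayer) {
  overflow: visible !important;
}
```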
So the next simplest thing is to use a little bit of javascript right after we call Plotly.newPlot:
// get the correct svg element
var mainSvgEl = document.querySelector('#positive g.hoverlayer').parentElement;
mainSvgEl.style['overflow'] = 'visible';
Or in a more generic way (works for any chart):
Array.from(document.querySelectorAll('g.hoverlayer')).forEach(hoverEl => {
  let mainSvgEl = hoverEl.parentElement;
  mainSvgEl.style['overflow'] = 'visible';
});
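If you want to apply the generic fix after every render, it can be wrapped in a small helper (the name unclipTooltips is mine, not Plotly's) and called once Plotly.newPlot resolves:

```javascript
// Walk every hover layer and let its parent <svg class="main-svg"> overflow,
// so tooltips can escape the plot area.
// `root` can be `document` or any element supporting querySelectorAll.
function unclipTooltips(root) {
  Array.from(root.querySelectorAll('g.hoverlayer')).forEach(hoverEl => {
    hoverEl.parentElement.style.overflow = 'visible';
  });
}

// Usage (hypothetical chart div id):
// Plotly.newPlot('myChart', data, layout).then(() => unclipTooltips(document));
```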

Svelte and D3 brush

I'm struggling to understand how to use Svelte with something like D3's brush project. Svelte operates using a declarative approach. In the area chart example, the SVG for the lines is written out in the template HTML. To do this with D3 you would use JavaScript function calls to select an element and call another function to modify the DOM. In the aforementioned chart example the D3 scale library is only used to generate the axis array, but the HTML itself is managed by Svelte. It makes sense that Svelte works this way; building things up with function calls would be a lot less clean. But I can't figure out how to do this with the brush. How can I declaratively build up the brush HTML inside of my Svelte template, and how would this affect things like brush events? Or would it be best to just use the brush functions inside of, say, onMount and tie change events to local Svelte variables?
The same problem exists in React, because both React and D3 want to be in charge of the DOM. In React you simply call a function that instructs D3 to do its work in the componentDidMount method (or a useEffect if using hooks).
Svelte expects to be in charge of the situation, you declare how the UI is constructed, and define the operations, leaving it to do the work. It won't be able to track what D3 does, so I suspect you need to just let D3 be in charge of that part, and not worry about it being a little bit hacky.
I managed to do this myself https://svelte.dev/repl/00f726facd434b978c737af2698e0dbc?version=3.12.1
As Mikkel said above, the way Svelte is designed doesn't play well naturally with something like D3. As I see it you have two options: try to wire D3 events into Svelte's reactive variables, or try to implement the functionality yourself.
I opted for the second version. I took the HTML and CSS that D3 Brush created, adding a mouse handler to the carets, and tied all the variables together reactively. (The last part I did very messily. Would appreciate any feedback on doing this cleaner from other Svelte users).
It also took me a bit to wrap my head around this. But in the end it's actually not that complicated. The key step is to decide which library takes care of which responsibility.
D3 simply overlaps with Svelte since it also renders things on the screen. But if you think about it, you don't really need a renderer if you already have Svelte. Once you have a renderer, the complicated part about charts is really the positioning. And this is where D3 really shines. If you "cherry pick" the best from both worlds you actually end up with a great dev experience.
But, alas, you can also leave the rendering to D3. But then you need to keep Svelte out of the picture as much as possible.
Essentially you have two options that both work well:
Svelte renders only the container DOM and then hands over to D3 to both calculate & render. You interface between both worlds only during onMount and onDestroy
Svelte renders the whole DOM, D3 provides the chart positions.
As for the brush functionality:
I found it to work best to create a ChartContainer (which is essentially just an SVG) with slots and then drop a Brush component inside.
<script>
  import { createEventDispatcher } from "svelte";

  export let minX;
  export let maxX;
  export let dX = 0;
  export let height;

  const dispatch = createEventDispatcher();

  let startX,
      endX,
      mouseDown = false,
      brushArea;

  function onMouseDown(event) {
    if (mouseDown) return;
    mouseDown = true;
    brushArea.removeEventListener("mousemove", onMouseMove);
    brushArea.removeEventListener("mouseup", onMouseUp);
    brushArea.addEventListener("mousemove", onMouseMove);
    brushArea.addEventListener("mouseup", onMouseUp);
    brushArea.style.cursor = "ew-resize";
    startX = Math.max(event.offsetX - dX, minX);
    endX = null;
  }

  function onMouseMove(event) {
    endX = Math.min(event.offsetX - dX, maxX);
  }

  function onMouseUp(event) {
    mouseDown = false;
    if (!endX) startX = null;
    brushArea.style.cursor = null;
    brushArea.removeEventListener("mousemove", onMouseMove);
    brushArea.removeEventListener("mouseup", onMouseUp);
    const active = !!startX;
    dispatch("brush", { active, startX, endX, clear });
  }

  function clear() {
    startX = null;
    endX = null;
  }
</script>
<rect class="ui--chart__brush_area"
  bind:this={brushArea}
  x={minX}
  y="0"
  height={height}
  width={maxX - minX}
  on:mousedown={onMouseDown}
/>
{#if endX != null}
  <rect class="ui--chart__brush"
    x={startX < endX ? startX : endX}
    y="0"
    height={height}
    width={startX < endX ? endX - startX : startX - endX}
  />
{/if}
The dX prop is used to account for a left margin. It might or might not be required in your use case (depending on how you set up your chart). The key thing is to be able to use offsetX from the mouse event, so that you know how far from the SVG borders your mouse events have fired.
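The offset arithmetic in onMouseDown/onMouseMove boils down to one small pure function. This is a hypothetical extraction for illustration only (it clamps at both ends, where each handler above clamps only one):

```javascript
// Translate a mouse event's offsetX into a brush coordinate:
// subtract the left margin dX, then clamp into the drawable range [minX, maxX].
function clampBrushX(offsetX, dX, minX, maxX) {
  return Math.min(Math.max(offsetX - dX, minX), maxX);
}
```

clampBrushX(event.offsetX, dX, minX, maxX) then behaves like the Math.max/Math.min pair in the handlers.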
Then, you just
listen for the brush events
extract the coordinates
convert them to values by using yourScale.invert(coordinate)
use these values, e.g. to update your chart
Like so:
function onBrush(event) {
  const { active, startX, endX, clear } = event.detail;
  if (active) {
    const startDate = scaleX.invert(startX);
    const endDate = scaleX.invert(endX);
    dispatch("brush", { active, startX, endX, startDate, endDate, clear });
  }
}
Hope this helps anyone else struggling with this. Good luck!
There isn't a single answer to your question, but I think the best option is to render the "most relevant data" using Svelte's HTML, leaving the interactive elements (like the brush) to run only on the client side.
You should know that Svelte converts your HTML to JS generators internally, so calling the D3 functions is actually pretty similar to what Svelte does client side. The only real advantage of using Svelte's HTML instead of D3 function calls is SSR, and that's the reason it's reasonable to leave the brush client-side only (since it needs JS to be interactive anyway).
Svelte is kind of a "reactive vanilla", so you can use low-level libraries almost directly. Sometimes you need to do some tricks to access DOM elements directly (like d3 usually does), for that I recommend using the bind:this directive. Example:
<script>
  import { brushX, select } from 'd3';
  //...
  let brushElement;

  $: brush = brushX()
    .extent([[padding.left, padding.top], [width - padding.right, height - padding.bottom]])
    .on('end', onZoom);

  $: if (brushElement) {
    select(brushElement)
      .call(brush);
  }
</script>

<svg>
  ...
  <g bind:this={brushElement} width={...} height={...} />
  ...
</svg>
One thing to consider when using the DOM API is SSR (Sapper): any call to D3's select should be done only in the browser. The if case in the code above takes care of that, because the bind:this directive will only set brushElement when running client side.

How to test react konva from cypress?

I have rectangles as screens on canvas using react-konva. How to test clicking the screens rectangle on testing tools like cypress that uses DOM Element to select the target element?
It seems this is impossible, unless I create DOM elements for the screens solely for testing purposes, apart from what currently exists on the canvas. That would take a lot of time and be cumbersome too.
So I wonder if we have a way to work around this to test objects that are drawn inside canvas itself?
Take a look into Konva testing code. Like https://github.com/konvajs/konva/blob/master/test/functional/MouseEvents-test.js
You can emulate clicks with this code (from here):
Konva.Stage.prototype.simulateMouseDown = function(pos) {
  var top = this.content.getBoundingClientRect().top;
  this._mousedown({
    clientX: pos.x,
    clientY: pos.y + top,
    button: pos.button || 0
  });
};
// then use it:
stage.simulateMouseDown({ x: 10, y: 50 });
But you have to find a way to access the stage instance for such testing. And I am not sure it fits the cypress philosophy, because its API is abstract and DOM-based.
Or you can try to trigger events with cypress:
cy.get(`.container > div`)
  .trigger('mousedown', { clientX: x, clientY: y });

Multiple views/renders of the same kineticjs model

I am building a graph utility that displays a rather large graph containing a lot of data.
One of the things I would like to be able to support is having multiple views of the data simultaneously in different panels of my application.
I've drawn a picture to try to demonstrate what I mean. Suppose I've built the gradiented image in the background using Kinetic.
I'd like to be able to show the part outlined in red and the part outlined in green simultaneously, without having to rebuild the entire image.
var stage1 = new Kinetic.Stage({
  container: 'container1',
  width: somewidth,
  height: someheight
});
var stage2 = new Kinetic.Stage({
  container: 'container2',
  width: someotherwidth,
  height: someotherheight
});
var Layer1 = new Kinetic.Layer({
  y: someY,
  scale: someScale
});
// add stuff to first layer here...
var Layer2 = new Kinetic.Layer({
  y: otherY,
  scale: otherScale
});
// add other stuff to second layer here...
stage1.add(Layer1);
stage1.add(Layer2);
stage2.add(Layer1);
stage2.add(Layer2);
At the point where I've added my layers to stage1, everything is fine, but as soon as I try to add them to stage2 as well, it breaks down. I'm sifting through the source but I can't see anything forcing data to be unique to a stage. Is this possible? Or do I have to duplicate all of my shapes?
Adding a node into multiple parents is not possible by KineticJS design. Each Layer has its own <canvas> element, and as far as I know it is not possible to insert the same DOM element into the document twice.

Draw Element's Contents onto a Canvas Element / Capture Website as image using (?) language

I asked a question on SO about compiling an image file from HTML. Michaël Witrant responded and told me about the canvas element and HTML5.
I've looked on the net and SO, but I haven't found anything regarding drawing a miscellaneous element's contents onto a canvas. Is this possible?
For example, say I have a div with a background image. Is there a way to get this element and its background image onto the canvas? I ask because I found a script that allows one to save a canvas element as a PNG, but what I really want to do is save a collection of DOM elements as an image.
EDIT
It doesn't matter what language, if it could work, i'm willing to attempt it.
For the record, drawWindow only works in Firefox.
This code will only work locally and not on the internet: using drawWindow with an external element raises a security exception.
You'll have to provide us with a lot more context before we can answer anything else.
http://cutycapt.sourceforge.net/
CutyCapt is a command line utility that uses Webkit to render HTML into PNG, PDF, SVG, etc. You would need to interface with it somehow (such as a shell_exec in PHP), but it is pretty robust. Sites render exactly as they do in Webkit browsers.
I've not used CutyCapt specifically, but it came to me highly recommended. And I have used a similar product called WkHtmlToPdf, which has been awesome in my personal experience.
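For example, a minimal CutyCapt invocation might look like the following (a sketch: check the tool's --help output for the exact flags your build supports; the output format is inferred from the file extension):

CutyCapt --url=http://example.com --out=example.png

From PHP you could run the same command via shell_exec and then serve the resulting file.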
After many attempts with drawWindow parameters that were drawing the wrong parts of the element, I managed to do it with a two-step process: first capture the whole page in a canvas, then draw a part of this canvas into another one.
This was done in a XUL extension. drawWindow will not work in other browsers, and may not work in a non-privileged context due to security reasons.
function nodeScreenshot(aSaveLocation, aFileName, aDocument, aCSSSelector) {
  var doc = aDocument;
  var win = doc.defaultView;
  var body = doc.body;
  var html = doc.documentElement;
  var selection = aCSSSelector
    ? Array.prototype.slice.call(doc.querySelectorAll(aCSSSelector))
    : [];
  var coords = {
    top: 0,
    left: 0,
    width: Math.max(body.scrollWidth, body.offsetWidth,
      html.clientWidth, html.scrollWidth, html.offsetWidth),
    height: Math.max(body.scrollHeight, body.offsetHeight,
      html.clientHeight, html.scrollHeight, html.offsetHeight)
  };
  var canvas = document.createElement("canvas");
  canvas.width = coords.width;
  canvas.height = coords.height;
  var context = canvas.getContext("2d");
  // Draw the whole page.
  // coords.top and left are 0 here; I tried to pass the result of
  // getBoundingClientRect() here but drawWindow was drawing another part,
  // maybe because of a margin/padding/position? Didn't solve it.
  context.drawWindow(win, coords.top, coords.left,
    coords.width, coords.height, 'rgb(255,255,255)');
  if (selection.length) {
    var nodeCoords = selection[0].getBoundingClientRect();
    var tempCanvas = document.createElement("canvas");
    var tempContext = tempCanvas.getContext("2d");
    tempCanvas.width = nodeCoords.width;
    tempCanvas.height = nodeCoords.height;
    // Draw the node part from the whole-page canvas into another canvas:
    // void ctx.drawImage(image, sx, sy, sWidth, sHeight,
    //                    dx, dy, dWidth, dHeight)
    tempContext.drawImage(canvas,
      nodeCoords.left, nodeCoords.top, nodeCoords.width, nodeCoords.height,
      0, 0, nodeCoords.width, nodeCoords.height);
    canvas = tempCanvas;
    context = tempContext;
  }
  var dataURL = canvas.toDataURL('image/jpeg', 0.95);
  return dataURL;
}
