Programmatically paint an illuminated section of a speedometer

I am trying to make a gauge in Qt Quick that has sections that "light up" from 0 to the needle's position.
One way I can think of doing this is by having an image of the segment and painting it and rotating it many times from code. However, I don't know how this can be done in QML.
It doesn't have to be QML and it doesn't have to be Qt Quick; it could be anything, as long as I can use it with Qt and within Qt Creator, and it preferably works across platforms.
Edit: I made a rough sketch, but Stack Overflow requires 10 reputation to post images, so I am placing links instead:
No segments illuminated | Some segments illuminated

You could easily use a Canvas element to draw an arc stroke, with control over its start and end position. Just compose that below the scale of the gauge.
Here is an example of how to do that, using a value from 0 to 1 to select how "full" the gauge is.
import QtQuick 2.2
import QtQuick.Controls 1.2

ApplicationWindow {
    visible: true
    width: 500
    height: 500

    Canvas {
        id: canvas
        anchors.fill: parent
        rotation: -90   // arcs start at 3 o'clock; rotate so 0 is at 12 o'clock
        onPaint: {
            var c = getContext('2d')
            c.clearRect(0, 0, width, height)
            c.beginPath()
            c.lineWidth = 30
            c.strokeStyle = "red"
            // sweep from 0 to a fraction of the full circle, driven by the slider
            c.arc(250, 250, 250 - 15, 0, Math.PI * 2 * circ.value)
            c.stroke()
        }
    }

    Slider {
        id: circ
        minimumValue: 0
        maximumValue: 1
        value: maximumValue / 2
        onValueChanged: canvas.requestPaint()   // repaint the arc on every change
    }
}
As requested by Mitch, an explanation: the canvas is rotated 90 degrees counter-clockwise because of the way Qt draws arcs: they do not start at "12 o'clock" but at 3. You can remove the rotation of the canvas in case you want to draw extra content, since you wouldn't want all your other drawing offset by 90 degrees just to line up with the rotated canvas. To get rid of the rotation, draw the arc in the range -Math.PI * 0.5 to Math.PI * 1.5 to account for the arc starting at 3 o'clock.
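For reference, a minimal sketch of the same Canvas without the rotation (same ids and sizes as above), shifting the arc's start and end angles instead:

Canvas {
    id: canvas
    anchors.fill: parent
    // no rotation: compensate in the arc angles instead
    onPaint: {
        var c = getContext('2d')
        c.clearRect(0, 0, width, height)
        c.beginPath()
        c.lineWidth = 30
        c.strokeStyle = "red"
        // start at 12 o'clock (-PI/2) and sweep the same fraction of the circle
        c.arc(250, 250, 250 - 15, -Math.PI * 0.5,
              -Math.PI * 0.5 + Math.PI * 2 * circ.value)
        c.stroke()
    }
}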

Related

Processing rect() causes image to be drawn instead of rectangle?

In my game there are enemies wandering around; their draw() method is simple. (core.displayBuffer is a PGraphics object that is drawn onto the screen at the end of draw().)
if (facingRight) {
    core.displayBuffer.image(image, x, y + offsetY, 80, 80);
} else {
    float tX = -core.camera.x + core.game.width / 2f + x;
    float tY = -core.camera.y + core.game.height / 2f + y;
    core.displayBuffer.pushMatrix();
    core.displayBuffer.translate(core.camera.x - core.game.width / 2f,
                                 core.camera.y - core.game.height / 2f);
    core.displayBuffer.translate(tX, tY);
    core.displayBuffer.scale(-1, 1);
    core.displayBuffer.image(image, -80, offsetY, 80, 80);
    core.displayBuffer.popMatrix();
}
Then when we are going to draw walls, we just draw a coloured rectangle like this:
core.displayBuffer.noStroke();
if (destroyed) {
    core.displayBuffer.fill(0, 0, 0, 16);
    core.displayBuffer.rect(x, y, w, h);
} else {
    core.displayBuffer.fill(64);
    core.displayBuffer.rect(x, y - WALL_HEIGHT, w, h);
    core.displayBuffer.fill(32);
    core.displayBuffer.rect(x, y + h - WALL_HEIGHT, w, WALL_HEIGHT);
}
But for some reason, the walls have the texture of the enemies? Here's the loop in which the objects are drawn:
PMatrix displayMatrix = displayBuffer.getMatrix();
PMatrix bloomMatrix = bloomLayer.getMatrix();
PStyle displayStyle = displayBuffer.getStyle();
PStyle bloomStyle = bloomLayer.getStyle();
onScreenObjects.forEach(o -> {
    displayBuffer.setMatrix(displayMatrix);
    bloomLayer.setMatrix(bloomMatrix);
    displayBuffer.style(displayStyle);
    bloomLayer.style(bloomStyle);
    o.draw(this);
});
displayBuffer.setMatrix(displayMatrix);
bloomLayer.setMatrix(bloomMatrix);
displayBuffer.style(displayStyle);
bloomLayer.style(bloomStyle);
Here's an example of the results; the red rectangles mark the walls that are drawn incorrectly.
The bullets are also flickering for some reason. Neither bug appears when I don't draw the enemies onto the screen (or draw plain rectangles instead of them), so image() must be doing something strange in the background?
Project's source code is at https://github.com/Matrx007/TheLostBits
Ask for additional info if needed!
Graphics card: Nvidia Quadro 4000. The driver is from 2016 and I can't upgrade it, though all other games work fine.
Processing version: 3.5.3 (Library)
Operating system and version: Windows 10 build 17134
Possible causes / solutions:
Maybe image() changes the texture currently in use, and rect() then draws with that texture?
SOLVED
The problem was that Processing can't draw onto more than one PGraphics at a time. I had beginDraw() active on two PGraphics objects and was drawing to both of them at the same time; once I separated the two begin/endDraw blocks, the bug was gone. A better explanation is here: https://github.com/processing/processing/issues/5863
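For illustration, a minimal sketch of the pattern, using the buffer names from the question (the drawing calls themselves are placeholders): the begin/endDraw blocks must not overlap.

// Buggy: two PGraphics open at the same time
displayBuffer.beginDraw();
bloomLayer.beginDraw();
// ... interleaved drawing into both buffers ...
bloomLayer.endDraw();
displayBuffer.endDraw();

// Fixed: finish one buffer before starting the other
displayBuffer.beginDraw();
// ... everything that draws into displayBuffer ...
displayBuffer.endDraw();
bloomLayer.beginDraw();
// ... everything that draws into bloomLayer ...
bloomLayer.endDraw();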

How to correctly position html elements in three js coordinate system?

Hopefully I have a simple problem that I just can't find an answer to.
I have three.js geometric spheres which move in a box. I place this box at the centre of the scene. The mechanics of how the spheres stay in the box are irrelevant. What is important is that the spheres move about the origin (0,0) and the canvas always fills the page.
I want to draw a line from the moving spheres to a div or img element on the page. To do this I would assume I have to transform the CSS coordinates to three.js coordinates. I found something I thought did something like this. (Note: the overuse of "something" signifies I am probably mistaken.)
I can add an HTML element to the same scene/camera as the WebGL renderer (obviously using a different renderer), but I am unsure how to proceed from there.
Basically I want to know:
1. How should I change the size of the div, preserving aspect ratio if need be? In essence I want the div or element to fill the screen at some camera depth.
2. How do I place the div at the centre of the scene by default? Mine seems to be shifted 1000 in the z direction, but this might be the size of the div (img) which I have to bring into view.
3. How do I draw a line between the WebGL sphere and the HTML div/img?
thanks in advance!
Unfortunately you have asked three questions, so it is tricky to address them all at once.
I will explain how to position DIV element on top of some 3D object. My example would be a tooltip that appears when you hover the object by mouse: http://jsfiddle.net/mmalex/ycnh0wze/
So let's get started,
First of all you need to subscribe to mouse events and convert the 2D coordinates of the mouse to relative coordinates on the viewport. You will find it very well explained here: Get mouse clicked point's 3D coordinate in three.js
Having 2D coordinates, raycast the object. These steps are quite trivial, but for completeness I provide the code chunk.
var raycaster = new THREE.Raycaster();

function handleManipulationUpdate() {
    // cleanup previous results, mouse moved and they're obsolete now
    latestMouseIntersection = undefined;
    hoveredObj = undefined;
    raycaster.setFromCamera(mouse, camera);
    {
        var intersects = raycaster.intersectObjects(tooltipEnabledObjects);
        if (intersects.length > 0) {
            // keep the point in 3D for the next steps
            latestMouseIntersection = intersects[0].point;
            // remember which object was hovered, as we will need to extract the tooltip text from it
            hoveredObj = intersects[0].object;
        }
    }
    ... // do anything else
    // depending on some conditions this may show or hide the tooltip
    showTooltip();
}
// The following two functions convert mouse coordinates
// from screen to the three.js system (where [0,0] is in the middle of the screen)
function updateMouseCoords(event, coordsObj) {
    coordsObj.x = ((event.clientX - renderer.domElement.offsetLeft + 0.5) / window.innerWidth) * 2 - 1;
    coordsObj.y = -((event.clientY - renderer.domElement.offsetTop + 0.5) / window.innerHeight) * 2 + 1;
}

function onMouseMove(event) {
    updateMouseCoords(event, mouse);
    handleManipulationUpdate();
}

window.addEventListener('mousemove', onMouseMove, false);
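For completeness, the snippets here assume a few globals that the fiddle sets up; roughly, in addition to the usual scene/camera/renderer:

var mouse = new THREE.Vector2();    // filled in by updateMouseCoords()
var latestMouseIntersection;        // 3D intersection point under the cursor, if any
var hoveredObj;                     // the hovered object, if any
var tooltipEnabledObjects = [];     // meshes that should show a tooltip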
And finally, the most important part: the DIV element placement. To understand the code it is essential to be comfortable with the Vector3.project method.
The sequence of calculations is as follows:
Get the 2D mouse coordinates,
Raycast the object and remember the 3D coordinate of the intersection (if any),
Project the 3D coordinate back into 2D (this step may seem redundant here, but what if you want to trigger an object's tooltip programmatically? You won't have mouse coordinates then),
Mess around to place the DIV centered above the 2D point, with a nice margin.
// This will move the tooltip to the current mouse position and show it.
function showTooltip() {
    var divElement = $("#tooltip");
    // element found and mouse hovers some object?
    if (divElement && latestMouseIntersection) {
        // hide until the tooltip is ready (prevents some visual artifacts)
        divElement.css({
            display: "block",
            opacity: 0.0
        });
        //!!! === IMPORTANT ===
        // the DIV element is positioned here
        var canvasHalfWidth = renderer.domElement.offsetWidth / 2;
        var canvasHalfHeight = renderer.domElement.offsetHeight / 2;
        var tooltipPosition = latestMouseIntersection.clone().project(camera);
        tooltipPosition.x = (tooltipPosition.x * canvasHalfWidth) + canvasHalfWidth + renderer.domElement.offsetLeft;
        tooltipPosition.y = -(tooltipPosition.y * canvasHalfHeight) + canvasHalfHeight + renderer.domElement.offsetTop;
        var tooltipWidth = divElement[0].offsetWidth;
        var tooltipHeight = divElement[0].offsetHeight;
        divElement.css({
            left: `${tooltipPosition.x - tooltipWidth / 2}px`,
            top: `${tooltipPosition.y - tooltipHeight - 5}px`
        });
        // get the text from the hovered object (we store it in .userData)
        divElement.text(hoveredObj.userData.tooltipText);
        divElement.css({
            opacity: 1.0
        });
    }
}

Projecting a point from world to screen. SO solutions give bad coordinates

I'm trying to place an HTML div element over a three.js object. Most stackoverflow solutions offer a pattern similar to this:
// var camera = ...
function toScreenXY(pos, canvas) {
    var width = canvas.width, height = canvas.height;
    var p = new THREE.Vector3(pos.x, pos.y, pos.z);
    var vector = p.project(camera);
    vector.x = (vector.x + 1) / 2 * width;
    vector.y = -(vector.y - 1) / 2 * height;
    return vector;
}
I've tried many variations on this idea, and all of them agree on giving me this result:
console.log(routeStart.position); // target mesh
console.log(toScreenXY(routeStart.position));
// output:
//
// mesh pos: T…E.Vector3 {x: -200, y: 200, z: -100}
// screen pos: T…E.Vector3 {x: -985.2267639636993, y: -1444.7267503738403, z: 0.9801980328559876}
The actual screen coordinates for this camera position and this mesh position are somewhere around x: 470, y: 80 - I determined them by hardcoding my div position.
-985, -1444 are not even close to the actual screen coords :)
Please don't offer links to existing solutions if they follow the same logic as the snippet I provided. I would be especially thankful if someone could explain why I get these negative values, even though this approach seems to work for everyone else.
Here's a couple of examples using the same principle:
Three.js: converting 3d position to 2d screen position
Converting World coordinates to Screen coordinates in Three.js using Projection
Now, I've figured out the problem myself! It turns out you can't project things before renderer.render() has been called: the camera's matrices are only brought up to date during rendering, so project() quietly works with stale values. It's very confusing that it just gives you back weird negative coords.
Hope other people will find this answer useful.
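A minimal sketch of the resulting workaround (assuming a standard three.js setup): if you need screen coordinates before the first render() call, bring the camera's matrices up to date yourself before projecting.

function toScreenXY(pos, camera, canvas) {
    // refresh camera.matrixWorld (and matrixWorldInverse), which render() would otherwise do
    camera.updateMatrixWorld();
    var vector = pos.clone().project(camera);
    vector.x = (vector.x + 1) / 2 * canvas.width;
    vector.y = -(vector.y - 1) / 2 * canvas.height;
    return vector;
}

After at least one render() call the original snippet works as-is, which is presumably why it seems to work for everyone else.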

CSS3DRenderer ignores the projectionMatrix property?

I'm doing augmented reality with three.js and recently I tried to combine WebGL and CSS3 rendering to render both 3D content and DOM objects at the same time (mostly for video playback). I started with the Closing the gap between html and webgl tutorial, but I cannot get a correct visualization using CSS (although WebGL is working fine).
Basically, when doing AR, we have two matrices we have to apply to our scene: the projection matrix and the camera matrix. The projection matrix (row-major) usually looks like this:

var projectionMatrix = [ 1.820090055466, 0,              -0.000550820783,  0,
                         0,              3.227676868439, -0.036605358124,  0,
                         0,              0,              -1.000199913979, -0.200020000339,
                         0,              0,              -1,               0 ];
And the camera matrix (row-major) is a rigid 3D transform (an R|t composition) representing the camera's pose in the virtual world:

var cameraMatrix = [  0.790828585625, 0.296402275562, -0.535477280617, -0.309822082520,
                     -0.612037420273, 0.382129371166, -0.692378044128, -0.447699964046,
                     -0.000600785017, 0.875284433365,  0.483608126640, -0.637073278427,
                      0.000000000000, 0.000000000000,  0.000000000000,  1.000000000000 ];
With WebGL it's pretty easy to apply these matrices to the pipeline:

self.wglCamera.matrixAutoUpdate = false;
self.wglCamera.projectionMatrix.set(
    pm[0],  pm[1],  pm[2],  pm[3],
    pm[4],  pm[5],  pm[6],  pm[7],
    pm[8],  pm[9],  pm[10], pm[11],
    pm[12], pm[13], pm[14], pm[15]);
self.wglCamera.matrix.set(
    cm[0],  cm[1],  cm[2],  cm[3],
    cm[4],  cm[5],  cm[6],  cm[7],
    cm[8],  cm[9],  cm[10], cm[11],
    cm[12], cm[13], cm[14], cm[15]);
When I do the same for the CSS3D camera, I get an incorrect rendering result (VIDEO):
There are two issues:
The red texture (a CSS3DObject) is non-uniformly scaled (it is in fact square).
It always sits in the screen centre, although it should be located where the blue grid is.
After analyzing the CSS3DRenderer implementation, I found that only the camera's fov property is used to set the perspective effect; the projectionMatrix property is totally ignored when rendering with CSS3DRenderer. Is this intended?
// https://github.com/mrdoob/three.js/blob/master/examples/js/renderers/CSS3DRenderer.js#L225
this.render = function ( scene, camera ) {
    var fov = 0.5 / Math.tan( THREE.Math.degToRad( camera.fov * 0.5 ) ) * _height;
    ...
    camera.matrixWorldInverse.getInverse( camera.matrixWorld );
    // Why don't we use camera.projectionMatrix here?
    var style = "translate3d(0,0," + fov + "px)" + getCameraCSSMatrix( camera.matrixWorldInverse ) +
                " translate3d(" + _widthHalf + "px," + _heightHalf + "px, 0)";
    ...
};
And if so, how can I achieve the desired result?
I've tried passing PM * CM as the camera matrix, but both problems still exist. I'm mostly worried about the ignored translation, since the rotation looks good.
I'd appreciate any ideas/suggestions! Thanks.
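One idea to try (an untested sketch, not a confirmed fix): since CSS3DRenderer derives its perspective solely from camera.fov, you can at least make the CSS perspective consistent with the AR projection matrix by extracting an equivalent vertical FOV from it. For a symmetric frustum the (1,1) element of the projection matrix equals 1 / tan(fovY / 2). Here cssCamera stands for whatever camera you pass to the CSS3D renderer:

// pm is the row-major projectionMatrix array from above; pm[5] is its (1,1) element
var fovY = 2 * Math.atan(1 / pm[5]) * 180 / Math.PI;  // about 34.4 degrees here
cssCamera.fov = fovY;
cssCamera.updateProjectionMatrix();

This cannot reproduce the off-diagonal terms of the projection matrix (pm[2], pm[6]), so a small offset may remain; fov alone simply cannot express them.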

Rotate image on its own center kineticJS

I'm trying to rotate an image added to my canvas using KineticJS, and I've got it almost working.
I know I need to set the offset to 'move' the rotation point, and that part is working. But the image is also moving to the location of the offset.
After doing some rotating I can drag my image to another location in the canvas and continue rotating around its own centre.
I don't want to rotate the whole canvas, because I have multiple images on a layer.
The relevant code:
function rotateLayer() {
    // Rotate the bird image
    var rotation = 15;
    // Set the rotation point to the image centre:
    imageDict[1].setOffsetX(imageDict[1].width() / 2);
    imageDict[1].setOffsetY(imageDict[1].height() / 2);
    // rotation in degrees
    imageDict[1].rotate(rotation);
    imageDict[1].getLayer().draw();
}
A working demo is on jsfiddle: http://jsfiddle.net/kp61vcfg/1/
So in short I want the rotation but not the movement.
How do you want to rotate without movement?
KineticJS rotates objects relative to their "start point". For example, for Kinetic.Rect the start point is {0, 0}, the top left corner. You can move this "start point" to any position with the offset params.
After a lot of trial and error I found the solution.
The trick is to set the offset during load to half the width and height, which puts the rotation point at the middle of the image, AND not to call image.cache():
function initAddImage(imgId, imgwidth, imgheight) {
    var imageObj = new Image();
    imageObj.src = document.getElementById(imgId).src;
    imageObj.onload = function () {
        var image = new Kinetic.Image({
            image: imageObj,
            draggable: true,
            shadowColor: '#787878',
            shadowOffsetX: 2,
            shadowOffsetY: 2,
            width: imgwidth,
            height: imgheight,
            x: 150, // half width of container
            y: 150, // half height of container
            offset: { x: imgwidth / 2, y: imgheight / 2 }, // rotation point
            imgId: imgId
        });
        layer.add(image);
        //image.cache();
        layer.draw();
        imageDict[currentLayerHandle] = image;
        currentLayerHandle++;
    };
}
I've updated my demo to a working version:
http://jsfiddle.net/kp61vcfg/2/
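For images that were already created without an offset, the same idea can be applied after the fact. A small sketch (a hypothetical helper, assuming the KineticJS offset()/position() accessors): move the rotation point to the centre and shift the position by the same amount, so the image stays visually in place.

function centreRotationPoint(image) {
    var w = image.width(), h = image.height();
    image.offset({ x: w / 2, y: h / 2 });                            // rotation point to centre
    image.position({ x: image.x() + w / 2, y: image.y() + h / 2 });  // compensate the shift
    image.getLayer().draw();
}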
