Drawing a 3D triangle over a scene in Irrlicht

I would like to draw a custom red 3D triangle over my scene. I followed some tutorials and came up with this code:
while (device->run()) {
    driver->beginScene();
    driver->setTransform(ETS_WORLD, matrix4());
    driver->setMaterial(material);
    driver->draw3DTriangle(myTriangle, SColor(0, 255, 0, 0));
    smgr->drawAll();
    driver->endScene();
}
But this only shows my 3D scene; there is no sign of a red triangle. I have checked its coordinates and they are correct, so I think it is just a rendering problem.

smgr->drawAll() clears the whole screen and then renders your scene, so calling it after driver->draw3DTriangle() erases your triangle. If you swap the order of the render calls, it will work fine. See below:
while (device->run()) {
    driver->beginScene();
    smgr->drawAll();
    driver->setTransform(ETS_WORLD, matrix4());
    driver->setMaterial(material);
    driver->draw3DTriangle(myTriangle, SColor(0, 255, 0, 0));
    driver->endScene();
}

Related

How to rotate the scene (QGraphicsScene) but not the view (QGraphicsView) in a pyqtgraph PlotWidget

I'm using a pyqtgraph PlotWidget to draw something, and that works well, but I run into a problem when I want to rotate the "view".
Here is the first picture, at 0 degrees:
[1]: https://i.stack.imgur.com/gQnGd.png
Then after rotating with a transform (code below), at 40 degrees as an example:
[2]: https://i.stack.imgur.com/qX12z.png
As marked in the second picture, after rotation the "out of view" area is supposed to be filled with the grid, and the items that were in the invisible area of the first picture should show too.
Code:
# The PlotWidget is initialized in the ui file with a QFrame parent,
# which is also the parent of the buttons and sliders.
range = QRectF(0, 0, self.plotwidget.size().width(), self.plotwidget.size().height()).center()
transform = self.plotwidget.transform()
transform.translate(range.x(), range.y())
transform.rotate(angle)
transform.translate(-range.x(), -range.y())
self.plotwidget.setTransform(transform)
I checked the APIs of QGraphicsScene and QGraphicsView; only QGraphicsView has a rotate method, which is effectively the same as rotating with a transform.
So I think rotating the QGraphicsView or PlotWidget actually rotates the view widget itself, and with it the QGraphicsScene inside. But how do I rotate the scene only?
Thanks in advance for your help.

How to fix a blue screen appearing after a GameObject is removed in a Unity 2D project

I have been successfully building and running a Unity 2D game, but started getting a blue screen during one of my operations. Specifically, when I close a popup and remove all of its child GameObjects, the entire game screen turns dark blue (the default background color of the main camera). The game's music still plays, and clicks are still registered if you click in the right area (I can still press the back button, I just can't see it).
If I remove one GameObject, the problem doesn't come up, but once I have to remove two GameObjects, the entire screen turns blue.
This is my function for removing a GameObject, in case it helps; it works perfectly as far as actually removing the GameObjects goes (the objects being removed are instantiated from prefabs). I think the problem may be with the camera for some reason, but I have no clue why it happens in this function.
public void Remove(int index)
{
    float toggleWidth = toggle.GetComponent<RectTransform>().sizeDelta.x;
    DestroyImmediate(scrollSnap.pagination.transform.GetChild(scrollSnap.NumberOfPanels - 1).gameObject);
    scrollSnap.Remove(index);
    scrollSnap.pagination.transform.position += new Vector3(toggleWidth / 2f, 0, 0);
}
I don't receive any errors or warnings in the console, just a blue screen once more than one GameObject is removed.
EDIT:
It turns out my main Canvas's planeDistance was being changed from 100 to 3200. I still have no clue why this change occurred, but for anyone else having a similar issue with a dark blue screen appearing in the middle or at the start of their game, please check the values of your Canvas and Camera in the Inspector. Simply controlling the planeDistance did the trick for me!
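If you would rather catch this from code than watch the Inspector, here is a minimal sketch of the idea. It is only an illustration under assumptions: the component name, the field, and the expected value of 100 are placeholders from my own setup, not anything Unity requires.
using UnityEngine;

// Hypothetical helper: attach to the Canvas to log (and reset) an
// unexpected change of planeDistance, e.g. a jump from 100 to 3200.
[RequireComponent(typeof(Canvas))]
public class PlaneDistanceWatcher : MonoBehaviour
{
    public float expectedPlaneDistance = 100f; // assumption: the value you expect
    private Canvas canvas;

    void Awake()
    {
        canvas = GetComponent<Canvas>();
    }

    void LateUpdate()
    {
        // planeDistance only matters when the Canvas is rendered by a camera.
        if (canvas.renderMode != RenderMode.ScreenSpaceCamera)
            return;

        if (!Mathf.Approximately(canvas.planeDistance, expectedPlaneDistance))
        {
            Debug.LogWarning("Canvas planeDistance changed to " + canvas.planeDistance + ", resetting.");
            canvas.planeDistance = expectedPlaneDistance;
        }
    }
}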
I made a new scene next to the sample scene when I started my sprite-learning game and forgot all about it. When I had to publish, I had the wrong scene selected in the Build Settings, so it published an empty scene instead of my game.
I solved it by switching the Scene view to 3D mode, where I realized the camera was positioned very far from the sprites; just change the Z position and switch back to 2D mode.
Just move your camera along the Z axis; it could be too far from the object, or it could be behind it. You can also check this in 3D mode: look at the object's position and try both positive and negative values.

Mask UI Image/RawImage

I recently encountered a problem with the UI. I opened a new 2D project and created a Canvas with a GameObject that has an Image component. I then added a sprite by right-clicking and choosing Assets > Create > Sprites > Circle. This added a circle sprite to my Assets folder.
The problem is that when I choose the circle as the Source Image for the Image component, it still displays as a rectangle.
The circle sprite's Texture Type is set to Sprite.
This problem also happens with the other shapes, such as triangle.
I am using Unity 5.6.0b9 Personal. Build target is PC, Mac, Linux Standalone.
I am probably missing something very simple. Any help is appreciated!
It doesn't work like that. Circle and all the other types of sprites under the Assets > Create > Sprites menu are only made to work with SpriteRenderers. This would have worked if you had used a SpriteRenderer from GameObject > 2D Object > Sprite. They do not work with the UI.
For the UI, this has to be done with the Mask component. Just get any round image, then use it to cut a circle out of your target square image:
1. Create a UI Image called "TargetSquare"; this is the image you want to round.
2. Duplicate it, name the copy "MaskCircle", and resize it so it is smaller than "TargetSquare" and the circle has the shape you want.
3. Make the "MaskCircle" object the parent of the "TargetSquare" object, then use your round sprite as its Source Image.
4. Attach the Mask component to the "MaskCircle" object.
Done. Your "TargetSquare" object will have the shape of the "MaskCircle" object (the same setup built from a script is sketched after these steps).
If you get jagged edges, select the sprite you used for the "MaskCircle" image and make sure that mipmaps are disabled.
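For completeness, a minimal sketch of the same parent/child setup built from a script instead of the editor. This is only an illustration under assumptions: circleSprite and squareSprite are placeholder fields for whatever sprites you use, and the script is assumed to sit on an object under your Canvas.
using UnityEngine;
using UnityEngine.UI;

public class RoundImageExample : MonoBehaviour
{
    public Sprite circleSprite; // hypothetical: the round mask sprite
    public Sprite squareSprite; // hypothetical: the image you want rounded

    void Start()
    {
        // Parent: the mask ("MaskCircle" in the steps above).
        GameObject maskGO = new GameObject("MaskCircle", typeof(Image), typeof(Mask));
        maskGO.transform.SetParent(transform, false);
        maskGO.GetComponent<Image>().sprite = circleSprite;
        maskGO.GetComponent<Mask>().showMaskGraphic = false; // hide the circle itself

        // Child: the image being cut out ("TargetSquare").
        GameObject targetGO = new GameObject("TargetSquare", typeof(Image));
        targetGO.transform.SetParent(maskGO.transform, false);
        targetGO.GetComponent<Image>().sprite = squareSprite;
    }
}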
Maybe it is a bug; in Unity 5.6.1f1 it is the same story.
Try downloading the newer version, Unity 5.6.2f; I don't know whether that bug is fixed or not.
Or
use an image editor, for example Photoshop.
P.S. My mistake, everything is fine: it works with a SpriteRenderer component. Unity generates a white square, and in the sprite's import settings you can set Sprite Mode to Polygon and create a shape from vertices.

Orthographic view of objects with a combined camera in three.js

I am trying to use the combined camera (found under "Examples").
The problem is that when I use it in orthographic mode, I still see the arrows and the box helper as if in a perspective view.
For example, when I try to zoom in with the mouse scroll, the plane stays the same size (as it should in an orthographic view), but the arrows and the small box between the arrows get smaller or bigger.
When I tried to debug it in the render function, I saw that the camera is still in orthographic mode when it renders the arrows.
Any idea how I can make all the objects appear in the orthographic view but still use the combined camera?
Edit:
I am not sure which part of the code I should post, so I have added a picture to describe my problem.
You can see that I am using an orthographic camera and trying to zoom in, and the axis arrows get bigger or smaller.
(Screenshot: the difference in the plane when zooming.)
I found a possible answer which worked for me:
In TransformControls.js, change the scale computation in the update function to:
scale = (camera.inOrthographicMode == true) ? 100 : worldPosition.distanceTo(camPosition) / 6 * scope.size;

Graphics editor: drawing and changing shapes (Windows GDI)

I need to draw, move, and change shapes (rectangles, circles, and so on) on a canvas (represented by a standard static control). All drawing operations are implemented with standard GDI functions.
I've implemented it like this (example for moving a shape; the other operations use the same principle):
...
// Before any action, set the foreground mix mode:
SetROP2(hdc_, R2_NOTXORPEN);
...

void OnMouseDown(...)
{
    SelectShapeUnderCursor();
}

void OnMouseMove(...)
{
    ...
    DrawShape(old_points); // Erase the shape at its old position (drawing again with the same pen removes it in R2_NOTXORPEN mode)
    DrawShape(new_points); // Draw the shape at its new position
    ...
}

void OnMouseUp(...)
{
    DrawShape(new_points); // Final draw of the shape
}
With this approach the shapes move and change correctly, but the big problem is the shapes' colors. For example, when the pen is green, the shape appears green on a white background and red on a black background. That is the normal behavior for the R2_NOTXORPEN mix mode.
But I want the shapes to have the same color as the pen. I have to give up the R2_NOTXORPEN mix mode, but then how do I correctly implement operations like moving and changing shapes? I can use GDI+ if needed.
This is the way it was done back in the Windows 3.x days, when all you had was a 386SUX. Now you just update the internal shape list and call InvalidateRect to let the WM_PAINT message handler re-render all the shapes. No need for XOR tricks and their fugly side effects. Double-buffer when it starts to flicker.
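In case it helps, here is a minimal sketch of that approach. The Shape struct, shapes_ list, and selected_ pointer are hypothetical stand-ins for your own shape model; only the Win32/GDI calls themselves are real API.
#include <windows.h>
#include <vector>

// Hypothetical minimal shape model kept by the window.
struct Shape
{
    RECT bounds;
    void MoveTo(POINT p) { OffsetRect(&bounds, p.x - bounds.left, p.y - bounds.top); }
    void Draw(HDC dc) const { Rectangle(dc, bounds.left, bounds.top, bounds.right, bounds.bottom); }
};

std::vector<Shape> shapes_;     // hypothetical: all shapes on the canvas
Shape* selected_ = nullptr;     // hypothetical: the shape being dragged

void OnMouseMove(HWND hwnd, POINT pt)
{
    if (selected_ != nullptr)
    {
        selected_->MoveTo(pt);               // update the model only
        InvalidateRect(hwnd, nullptr, TRUE); // ask Windows for a repaint
    }
}

void OnPaint(HWND hwnd)
{
    PAINTSTRUCT ps;
    HDC hdc = BeginPaint(hwnd, &ps);

    // Double buffer: render everything into a memory DC, then blit once.
    RECT rc;
    GetClientRect(hwnd, &rc);
    HDC memDC = CreateCompatibleDC(hdc);
    HBITMAP bmp = CreateCompatibleBitmap(hdc, rc.right, rc.bottom);
    HGDIOBJ oldBmp = SelectObject(memDC, bmp);

    FillRect(memDC, &rc, (HBRUSH)GetStockObject(WHITE_BRUSH));
    for (const Shape& s : shapes_)
        s.Draw(memDC);                       // ordinary pens, no R2_NOTXORPEN

    BitBlt(hdc, 0, 0, rc.right, rc.bottom, memDC, 0, 0, SRCCOPY);

    SelectObject(memDC, oldBmp);
    DeleteObject(bmp);
    DeleteDC(memDC);
    EndPaint(hwnd, &ps);
}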
