Is this possible with Unity3D? ...
I have a Texture2D displayed via a RawImage on a UI Canvas, and pixels are drawn into it with SetPixel(). I now want to scroll the Texture2D's contents pixel by pixel. I don't want to use a material or any fancy stuff, as this code should be very efficient and lightweight. Can this be done somehow?
I ended up creating a wrapper class for Texture2D that adds an API better suited to pixel-manipulation tasks. It still uses GetPixels/SetPixels throughout, since there isn't really much more API than that in Unity3D currently, but it's better than nothing. If anyone has a better method or suggestions, it would be cool if you could share them.
Here's the class named Bitmap2D https://gist.github.com/hexagonstar/be39f847a4840c838500
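For reference, a minimal sketch of the core scroll operation, independent of the gist (the class name below is illustrative, not part of the gist's API). GetPixels32/SetPixels32 move the whole buffer in two calls, which is far cheaper than looping over SetPixel():

```csharp
using UnityEngine;

// Minimal sketch: scroll a Texture2D's contents left by one pixel per call,
// wrapping the leftmost column around to the right edge.
public static class TextureScroller
{
    public static void ScrollLeft(Texture2D tex)
    {
        int w = tex.width;
        int h = tex.height;
        Color32[] src = tex.GetPixels32();   // row-major, bottom-left origin
        Color32[] dst = new Color32[src.Length];

        for (int y = 0; y < h; y++)
        {
            int row = y * w;
            // shift each row one pixel to the left...
            System.Array.Copy(src, row + 1, dst, row, w - 1);
            // ...and wrap the leftmost pixel around to the right edge
            dst[row + w - 1] = src[row];
        }

        tex.SetPixels32(dst);
        tex.Apply(false);                    // upload to the GPU, no mipmaps
    }
}
```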
Can I have a 2D layer for UI, text, buttons, etc. over the 3D scene in three.js?
Ideally something like the engine from PixiJS running inside three.js? I've seen that PixiJS offers some 3D features, so why not combine both libraries into something super-powerful? I just don't want to place any HTML DOM elements over the WebGL canvas, as this would probably hurt performance on mobile devices.
One way to solve this is to implement the UI as screen-space sprites, as demonstrated in the following official example (check out how the red sprites are rendered):
https://threejs.org/examples/webgl_sprites
The idea is to render them with a separate orthographic camera and an additional call to WebGLRenderer.render(). In addition, instances of THREE.Sprite support raycasting, which is of course useful when implementing interaction.
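A minimal sketch of that two-pass setup, following the pattern in the linked example ('button.png' is a placeholder asset):

```javascript
// Two-pass rendering: the 3D scene first, then a screen-space HUD scene
// drawn with an orthographic camera on top of it.
const renderer = new THREE.WebGLRenderer();
renderer.setSize(innerWidth, innerHeight);
renderer.autoClear = false;                 // we clear manually between passes
document.body.appendChild(renderer.domElement);

const scene = new THREE.Scene();            // your 3D content goes here
const camera = new THREE.PerspectiveCamera(70, innerWidth / innerHeight, 1, 1000);
camera.position.z = 5;

// HUD scene in pixel units, origin at the center of the viewport
const sceneHUD = new THREE.Scene();
const cameraHUD = new THREE.OrthographicCamera(
  -innerWidth / 2, innerWidth / 2, innerHeight / 2, -innerHeight / 2, 1, 10);
cameraHUD.position.z = 10;

const buttonTexture = new THREE.TextureLoader().load('button.png'); // placeholder
const button = new THREE.Sprite(new THREE.SpriteMaterial({ map: buttonTexture }));
button.scale.set(128, 64, 1);               // sprite size in pixels
sceneHUD.add(button);

function animate() {
  requestAnimationFrame(animate);
  renderer.clear();
  renderer.render(scene, camera);           // pass 1: the 3D world
  renderer.clearDepth();                    // keep the HUD on top
  renderer.render(sceneHUD, cameraHUD);     // pass 2: the UI layer
}
animate();
```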
Building on Mugen87's answer, you can also use THREE.Shape to make visual containers adapted to the user's screen size:
https://threejs.org/docs/#api/en/extras/core/Shape
You can also use THREE.Shape to make mesh-based text, as illustrated in this example:
https://threejs.org/examples/?q=text#webgl_geometry_text_shapes
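A rough sketch of mesh-based text built from font shapes, assuming a typeface JSON font is available (the font path and the addons import location depend on your three.js version):

```javascript
import * as THREE from 'three';
import { FontLoader } from 'three/addons/loaders/FontLoader.js';

// Build flat, mesh-based text from the font's outline shapes.
const loader = new FontLoader();
loader.load('fonts/helvetiker_regular.typeface.json', (font) => {
  const shapes = font.generateShapes('Hello UI', 24);  // size in world units
  const geometry = new THREE.ShapeGeometry(shapes);
  geometry.center();                                   // center the text block
  const label = new THREE.Mesh(
    geometry,
    new THREE.MeshBasicMaterial({ color: 0xffffff, side: THREE.DoubleSide }));
  sceneHUD.add(label);  // e.g. the orthographic HUD scene from the sprite answer
});
```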
You should also have a look at three-mesh-ui, an add-on for building mesh-based user interfaces with three.js:
https://github.com/felixmariotto/three-mesh-ui
I want to render some graphics, text, and shapes to a bitmap using a canvas-like API such as nanovega, in D, on my server.
I know how to create an arsd window with an OpenGL context and render to it (as per the documentation), but is it also possible to render to a headless context, or even to draw directly into some memory buffer? (I don't think my server has OpenGL available, so I would probably need to use some software renderer.)
Mainly I want to use gradients, text, shapes, rounded corners, images, and masks, and render all of that to an image. I know that nanovega implements all of those rendering features, so I would like to keep using it.
I'd like to create some mechanism that provides a text overlay on top of my 3D scene at certain times (such as when a mouse button is clicked).
I'm going over the tutorials on GitHub and noticed things like the THREE.TextGeometry class. Using it I can put 3D text in the scene, but it may be a bit more than I need. What I'm really after is a way to put some text on, say, a black background, overlay it on the scene, then move it out of the way when done. Does anyone know of good ways to do this in three.js? (If THREE.TextGeometry is a good way to do this, that's fine; I'm just not sure how to do the overlay bit.)
Use HTML. It's super easy and powerful, especially if you just need an overlay. With CSS you can also achieve things like a semi-transparent background. If you want it to "blend" into the scene, i.e. have perspective etc., you can use THREE.CSS3DRenderer, which will transform divs based on the camera you supply.
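For the plain-overlay case, a minimal sketch (no three.js involvement at all; the element just sits above the canvas and is toggled on click):

```javascript
// A plain HTML overlay positioned above the WebGL canvas.
const overlay = document.createElement('div');
overlay.textContent = 'Paused';
Object.assign(overlay.style, {
  position: 'absolute',
  top: '0', left: '0', right: '0', bottom: '0',
  display: 'none',                  // hidden until needed
  alignItems: 'center',
  justifyContent: 'center',
  color: 'white',
  font: '24px sans-serif',
  background: 'rgba(0, 0, 0, 0.6)', // semi-transparent black background
  pointerEvents: 'none',            // let clicks fall through to the canvas
});
document.body.appendChild(overlay);

// Toggle the overlay on mouse click, as the question describes.
window.addEventListener('mousedown', () => {
  const hidden = overlay.style.display === 'none';
  overlay.style.display = hidden ? 'flex' : 'none';
});
```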
I have a CAOpenGLLayer subclass which overrides drawInCGLContext; there I draw a rectangle with OpenGL. The CAOpenGLLayer is added to a CALayer and shown.
So with this architecture, whenever I want to draw something I need to do it in drawInCGLContext.
What I would like is some sort of context that another class can draw, animate, or render into, and whose contents are displayed every time drawInCGLContext occurs.
So basically the only thing my subclass should do is display a remote (OpenGL) context. What's the best way to achieve this? Or should I consider a different approach?
*Not using a CALayer is not an option.
Have you considered using a frame buffer object (FBO)? You can create one that is backed by a texture. Your "remote" drawing class could draw into the FBO, which causes the drawing to go to the texture that backs it. You can then use that texture elsewhere, such as blitting it to the screen in your CAOpenGLLayer subclass. See this link for details of how to use an FBO.
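A rough sketch of such a texture-backed FBO in plain OpenGL (depending on your context version you may need the EXT-suffixed calls from the original framebuffer_object extension instead):

```c
#include <OpenGL/gl3.h>   /* macOS core-profile header; adjust per platform */

/* Create a texture-backed FBO. Assumes a current OpenGL context. */
static GLuint createFBO(GLsizei width, GLsizei height, GLuint *outTexture)
{
    GLuint fbo, tex;

    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, NULL);   /* allocate, no data */
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

    glGenFramebuffers(1, &fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D, tex, 0);

    if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE) {
        /* handle error: framebuffer incomplete */
    }

    /* The "remote" class renders while this FBO is bound; afterwards,
       rebind the default framebuffer and sample the texture on screen. */
    glBindFramebuffer(GL_FRAMEBUFFER, 0);
    *outTexture = tex;
    return fbo;
}
```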
If I wanted to use Qt to simply have some circles move around inside a white box, or over a graphic, what would be the best method?
Would I need to redraw the white background (or the graphic) behind wherever a circle moved from each time? Is there a simple way of accomplishing this in Qt?
Create a QGraphicsView in your widget and add a QGraphicsScene to the view.
Add a QGraphicsEllipseItem to the scene.
Use QPropertyAnimation to change the "pos" property of the ellipse item. (Note that QGraphicsEllipseItem is not a QObject, so either wrap it in a QObject-derived class or drive setPos() from a QVariantAnimation; see the sketch below.)
If you need more advanced features, you can build your own animation class on top of QPropertyAnimation.
enjoy it:)
Update: You can read Qt's Next Generation UI for more information.
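A compact sketch of those steps, assuming Qt Widgets. Since QGraphicsEllipseItem is not a QObject, this version swaps QPropertyAnimation for its base class QVariantAnimation and forwards each interpolated value to setPos():

```cpp
#include <QApplication>
#include <QGraphicsEllipseItem>
#include <QGraphicsScene>
#include <QGraphicsView>
#include <QVariantAnimation>

int main(int argc, char *argv[]) {
    QApplication app(argc, argv);

    QGraphicsScene scene(0, 0, 400, 300);     // the "white box"
    scene.setBackgroundBrush(Qt::white);

    QGraphicsEllipseItem *circle =
        scene.addEllipse(0, 0, 30, 30, QPen(Qt::NoPen), QBrush(Qt::blue));

    // Animate the item's position; the view repaints dirty regions itself,
    // so there is no manual background restoration to worry about.
    auto *anim = new QVariantAnimation(&scene);
    anim->setStartValue(QPointF(0, 135));
    anim->setEndValue(QPointF(370, 135));
    anim->setDuration(2000);
    anim->setLoopCount(-1);                   // loop forever
    QObject::connect(anim, &QVariantAnimation::valueChanged,
                     [circle](const QVariant &v) { circle->setPos(v.toPointF()); });
    anim->start();

    QGraphicsView view(&scene);
    view.show();
    return app.exec();
}
```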
Subclass QWidget. Start a timer with startTimer() and animate the circles' positions in timerEvent(), calling update() at the end to schedule a repaint of the widget. Override the widget's paintEvent(); in there you draw your background and circles using a QPainter object. The Qt Assistant has examples of how to use QPainter.
Qt also has a new animation framework that may facilitate something like this, but I have not looked into it.
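A minimal sketch of that approach (sizes and speeds are arbitrary):

```cpp
#include <QApplication>
#include <QPainter>
#include <QWidget>

class CircleWidget : public QWidget {
public:
    CircleWidget() {
        resize(400, 300);
        startTimer(16);                     // ~60 updates per second
    }

protected:
    void timerEvent(QTimerEvent *) override {
        m_x = (m_x + 2) % (width() - 30);   // move right, wrap around
        update();                           // schedule a repaint
    }

    void paintEvent(QPaintEvent *) override {
        QPainter p(this);
        p.fillRect(rect(), Qt::white);      // repaint the background...
        p.setRenderHint(QPainter::Antialiasing);
        p.setBrush(Qt::blue);
        p.drawEllipse(m_x, 135, 30, 30);    // ...then the circle on top
    }

private:
    int m_x = 0;
};

int main(int argc, char *argv[]) {
    QApplication app(argc, argv);
    CircleWidget w;
    w.show();
    return app.exec();
}
```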