I have two questions.
How do I create a simple window with a canvas using the Windows API?
What is the most low-level method for writing pixels to a canvas? I know about glDrawPixels. Are there any other methods that use a smaller call stack or work closer to the hardware?
I am making a simple game using Gosu, but I do not want to use external sprites for my game; I would rather draw images in code. Is it possible to use code-drawn images as if they were external images?
Note: I only want to draw basic shapes, such as squares.
Yes, you can use the draw_line, draw_triangle, and draw_quad methods of Gosu's Window class for this.
I need to draw an overlay consisting of lines and text on top of another application. The application in question is a 3D outside-world viewpoint, and the overlay is a head-up display.
I don't have access to any type of callback from the outside-world application to execute draw code in its draw loop.
Drawing directly over the application's window will result in flickering as the draw loops will not be synchronized, so to me that doesn't seem like an option.
One method I can think of is to capture the outside world application's pixels and stream them into my application, so I can draw the overlay on top in the same draw loop, but that seems very inefficient.
Is there an efficient way to draw over the outside world application without flickering?
Is it possible to draw something over the final graphics card output / at the monitor's refresh rate?
P.S. It doesn't have to be OpenGL, but the HUD is already written in OpenGL, so that would make it easier.
To repeat what I said in the comments: I've come across quite a few apps that hijack the 3D API calls and inject their own code to draw stuff right at the end of each frame: Steam, TeamSpeak, Mumble. Since the drawing happens in the same application there's no flickering, and you can draw directly rather than copying the result somewhere and compositing. I've never done it before and probably won't do a good job explaining it.
A related question is here: Overlaying on a 3D fullscreen application
I am developing a map app for our school. The school provided me with its own map image and coordinate information, so I want to use that image as the map source and show a point on it according to the user's location. Can anybody give me some advice?
Thanks in advance.
There are two ways:
It is possible to change the source of the map tiles (e.g. from Bing to, say, Nokia or Google) of the Map control. However, for this to work, the tile source must implement mechanisms like quadkeys (e.g. see this). So, to answer your question: if you would like to use the Bing Map control with your school's map so that you can leverage the positioning features of the control, you would need a properly designed map-tile server. Also, there might be legal issues with altering the Bing Map control, if I am not mistaken.
However, given that you are suggesting an image of the map and then doing positioning, I would suggest it can be as easy as calibrating the pixel X-Y coordinate system of the map image against the geo-coordinates provided by the geo-watcher. Then, in your code, you can do a simple mapping between these two systems and draw something on top of the image. For this part you could use a WriteableBitmap, or simply use the fact that you can overlay UI controls in Silverlight: have a Canvas containing an image of your school's map, and on top of that Canvas an Image element representing the device, whose top-left coordinate you change relative to the Canvas.
So, in summary: as the geo-watcher hands geo-coordinates to your code, a mapping function (which you have pre-calculated) converts them to pixel X-Y, and you use that X-Y to position an overlay Image or draw a "pin" on a WriteableBitmap on which you have previously drawn the map of your school. Things get complicated with this approach when you also want zooming, but the solution scales easily.
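A minimal sketch of such a mapping function, shown in Java for illustration (the original context is Silverlight/C#, but the arithmetic is language-agnostic). All names and calibration constants here are hypothetical; you would measure them by matching known landmarks on the image to their geo-coordinates:

    /** Minimal sketch of a linear geo-to-pixel mapping. Assumes the map image
     *  is north-up and covers a small area, so a linear fit is accurate.
     *  All names and constants are hypothetical. */
    public final class GeoToPixelMapper {
        private final double originLat, originLon;     // geo-coordinates of pixel (0, 0)
        private final double pxPerDegLon, pxPerDegLat; // calibration: pixels per degree

        public GeoToPixelMapper(double originLat, double originLon,
                                double pxPerDegLon, double pxPerDegLat) {
            this.originLat = originLat;
            this.originLon = originLon;
            this.pxPerDegLon = pxPerDegLon;
            this.pxPerDegLat = pxPerDegLat;
        }

        /** Maps a geo-coordinate from the geo-watcher to pixel X-Y on the image. */
        public double[] toPixel(double lat, double lon) {
            double x = (lon - originLon) * pxPerDegLon;
            double y = (originLat - lat) * pxPerDegLat; // screen Y grows downward
            return new double[] { x, y };
        }
    }

With two known reference points you can solve for the pixels-per-degree factors once and hard-code them, since the school map never changes.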
Does this help clear things a bit?
Answering the second question in the comments below:
Yes, you can zoom in and out of the canvas, but you would have to program it yourself; the Canvas control does not have this capability on its own. You would have to recognize the triggers for a zoom action (e.g. clicking the (+) or (-) buttons, or pinch and stretch gestures) and react by re-drawing a portion of the region on the canvas so that the region stretches over the entire canvas. That is zooming. For the zoom-in case, for instance, you would have to determine a geometrical area that corresponds to the zoom factor and is in proportion to the dimensions of the Canvas object. Then you would scale that portion up, so that edges, and the empty spaces representing walls and the gaps between them, grow proportionately. You also have to determine the center point of that region, which you fix on the canvas so that everything grows away from it. That achieves an appropriate zooming effect. At this point you would have to re-adjust your mapping function from geo-coordinates to pixel X-Y so that the "pin" or object of interest can be drawn accurately on the newly rendered surface (see the sketch below).
I understand that this can appear quite involved, but it is straightforward once you appreciate the mechanics of what is required.
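To make the zoom re-adjustment concrete, here is a minimal sketch extending the hypothetical mapper above with a scale factor and a fixed zoom center; again, every name here is an assumption, not a prescribed API:

    /** Minimal sketch of re-adjusting the mapping for zoom: scale pixel
     *  coordinates away from a fixed center point. Names are hypothetical. */
    public final class ZoomedMapper {
        private final GeoToPixelMapper base;
        private final double scale;            // 1.0 = no zoom, 2.0 = zoomed in 2x
        private final double centerX, centerY; // pixel point that stays fixed

        public ZoomedMapper(GeoToPixelMapper base, double scale,
                            double centerX, double centerY) {
            this.base = base;
            this.scale = scale;
            this.centerX = centerX;
            this.centerY = centerY;
        }

        /** Pixel position of a geo-coordinate on the zoomed canvas. */
        public double[] toPixel(double lat, double lon) {
            double[] p = base.toPixel(lat, lon);
            return new double[] {
                centerX + (p[0] - centerX) * scale, // everything grows away
                centerY + (p[1] - centerY) * scale  // from the fixed center
            };
        }
    }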
Another, easier option could be to use SVG (Scalable Vector Graphics) in a WebBrowser control. Note that you would still require the geo-coordinate-to-pixel-X-Y mapping. However, with this approach you get the zooming almost for free: SVG has transformation capabilities for scale-up and scale-down operations, and the WebBrowser control renders the SVG and handles the zoom gestures. The cost of the work would be in re-creating your school's map, which is a bitmap, as SVG. There are tools like Inkscape which you can use to load the image of your map and trace the outlines over it; you can then save that outline document as an SVG. In fact, I would recommend trying this approach before tackling the Canvas method, as I feel it would be the easiest path for your needs.
I'm trying to port a Tcl/Tk program I wrote 20 years ago to HTML5.
After hours of frustration, I learned that when you "scale" or "translate" HTML5's canvas element, it only applies to future drawings, not items already on the canvas.
This is the opposite of Tcl/Tk, where items already on the canvas are scaled/translated instead.
Short of creating a draw/redraw loop (where I clear the canvas and redraw all the objects myself whenever I want to scale/translate), is there any way to make HTML5's canvas element behave like Tcl/Tk's?
Or am I missing something big?
The Canvas 2D context is based around pixel-wise image manipulation; it is not a "retained mode" graphics interface like the one you are apparently familiar with. There literally is no record of your graphics for it to redraw. If you want to change the graphics, you have to redraw them somehow.
Everything is redrawn in the end (though the redrawing may be hidden from your code), but there are ways to reduce the amount of work you have to do. Here are some options, roughly in order of how much change you'll have to make to your code (and roughly in order of improving quality/performance):
Draw your graphics on the canvas, then scale and translate the canvas itself using CSS properties (not the width and height attributes of the canvas, which will clear it). This will rescale the image, possibly losing quality, since you're not drawing it anew optimized for the current scale.
Draw your graphics on the canvas, then export them into an ImageData or a data URL, then when needed redraw that onto the canvas. Again, may lose quality.
The above two are essentially kludges to keep using the canvas code you've already written. To get a proper system like the one you describe Tk as having, you want to:
Build your own scene graph: create a set of objects like Circle, Line, etc. which represent graphics, and containers for those which store transform attributes like scale and position. Then write routines to walk this graph and execute the appropriate drawing commands whenever you need to redraw (see the sketch after this list).
Use SVG instead. SVG is a language for vector graphics which, in modern browsers, you can embed directly in your HTML, and manipulate in JavaScript just like you would the rest of your page. In SVG, you can simply change a scale attribute and get the change you expect to see.
(The previous option is basically reinventing a small amount of SVG.)
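Here is a minimal sketch of the scene-graph option, written in Java for consistency with the other examples on this page; the structure translates directly to JavaScript, where the Surface stand-in below would be the canvas's 2D context (which really does have save(), restore(), translate(x, y) and scale(x, y) methods). All class names here are made up:

    import java.util.ArrayList;
    import java.util.List;

    /** Stand-in for a 2D drawing context such as CanvasRenderingContext2D. */
    interface Surface {
        void save();                         // push the current transform
        void restore();                      // pop back to the saved transform
        void translate(double dx, double dy);
        void scale(double factor);
        void drawCircle(double cx, double cy, double r);
    }

    /** A scene-graph node knows how to draw itself onto a Surface. */
    interface Node {
        void draw(Surface s);
    }

    /** Leaf node: an actual shape. */
    final class Circle implements Node {
        final double cx, cy, r;
        Circle(double cx, double cy, double r) { this.cx = cx; this.cy = cy; this.r = r; }
        public void draw(Surface s) { s.drawCircle(cx, cy, r); }
    }

    /** Container node: stores transform attributes and applies them to children. */
    final class Group implements Node {
        double scale = 1.0, tx = 0.0, ty = 0.0;
        final List<Node> children = new ArrayList<>();
        public void draw(Surface s) {
            s.save();                        // remember the parent's transform
            s.translate(tx, ty);             // apply this group's transform...
            s.scale(scale);                  // ...before drawing any children
            for (Node child : children) child.draw(s);
            s.restore();
        }
    }

To get the Tk-like behavior, you change a Group's scale or tx/ty and redraw the whole graph; the shape objects themselves never change.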
I am using LibGDX for a small app project, and I need to somehow take a series of sprites and place them (or their pixels rather) into a Pixmap. The basic idea is to take random sprites that are generated through various means while the app is running, and, only at specific times, merge some of them onto a single background sprite.
I believe that most of this can be done easily, but the step of getting the sprite images into the Pixmap isn't quite so obvious to me. The sprites also have various transparent and semi-transparent pixels, so simply grabbing the color at each pixel while it is all on the same screen isn't really applicable either, as it obviously shouldn't take the background colors with it.
If there is a suitable alternative to this that would accomplish what I am looking for I would also love to hear it. Any help is highly appreciated.
I think you want to render your sprites to an off-screen buffer (called an "FBO", or FrameBuffer in libGDX), blending them as they're added, and then render that off-screen buffer to the screen in a single draw call? If so, this question should help: libgdx SpriteBatch render to texture
This requires OpenGL ES 2.0, which will eliminate support for some older devices.
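A minimal sketch of that approach, assuming a classic libGDX setup (FrameBuffer, SpriteBatch, and ScreenUtils.getFrameBufferPixmap are real libGDX APIs; the 256x256 size, the texture arguments, and the draw positions are placeholder assumptions):

    import com.badlogic.gdx.Gdx;
    import com.badlogic.gdx.graphics.GL20;
    import com.badlogic.gdx.graphics.Pixmap;
    import com.badlogic.gdx.graphics.Texture;
    import com.badlogic.gdx.graphics.g2d.SpriteBatch;
    import com.badlogic.gdx.graphics.glutils.FrameBuffer;
    import com.badlogic.gdx.utils.ScreenUtils;

    public final class SpriteMerger {
        /** Draws the given textures into an off-screen buffer, blending them,
         *  and reads the result back as a Pixmap. Call on the render thread. */
        public static Pixmap merge(SpriteBatch batch, Texture background, Texture sprite) {
            FrameBuffer fbo = new FrameBuffer(Pixmap.Format.RGBA8888, 256, 256, false);
            fbo.begin();                             // redirect rendering into the FBO
            Gdx.gl.glClearColor(0f, 0f, 0f, 0f);     // start fully transparent
            Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT);
            batch.getProjectionMatrix().setToOrtho2D(0, 0, 256, 256);
            batch.begin();
            batch.draw(background, 0, 0);
            batch.draw(sprite, 64, 64);              // alpha blending is on by default, so
                                                     // transparent pixels don't overwrite
            batch.end();
            // Read back while the FBO is still bound; note glReadPixels uses a
            // bottom-left origin, so the Pixmap may need a vertical flip.
            Pixmap merged = ScreenUtils.getFrameBufferPixmap(0, 0, 256, 256);
            fbo.end();                               // restore the default framebuffer
            fbo.dispose();
            return merged;
        }
    }

If you only need to draw the merged result (rather than inspect its pixels), you can skip the read-back and draw fbo.getColorBufferTexture() directly, flipped vertically via a TextureRegion.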