I'm trying to port a Tcl/Tk program I wrote 20 years ago to HTML5.
After hours of frustration, I learned that when you "scale" or
"translate" HTML5's canvas element, the transform only applies to
future drawing, not to items already on the canvas.
This is the opposite of Tcl/Tk, where scaling/translating applies to
the items already on the canvas.
Short of creating a draw/redraw loop (where I clear the canvas and
redraw all the objects myself whenever I want to scale/translate), is
there any way to make HTML5's canvas element behave like Tcl/Tk's?
Or am I missing something big?
The Canvas 2D Context is based around pixel-wise image manipulation; it is not a "retained mode" graphics interface like the one you are familiar with from Tk. There literally is no record of your graphics for it to redraw. If you want to change the graphics, you have to redraw them somehow.
Everything gets redrawn in the end (though the redrawing may be hidden from your code), but there are ways to reduce the amount of work you have to do yourself. Here are some options, roughly in order of how much change you'll have to make to your code (and roughly in order of improving quality/performance):
Draw your graphics on the canvas, then scale and translate the canvas itself using CSS properties (not the width and height attributes of the canvas, which would clear it). This rescales the existing pixels, possibly losing quality, since nothing is drawn anew at the current scale.
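For instance, a minimal sketch (plain DOM calls; the selector and the numbers are illustrative):

// Draw once, then scale/translate the rendered pixels with CSS.
// This resamples the existing bitmap; nothing is redrawn.
const canvas = document.querySelector('canvas')!;
canvas.style.transformOrigin = '0 0';
canvas.style.transform = 'scale(2) translate(10px, 20px)';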
Draw your graphics on the canvas, then export them into an ImageData or a data URL, and when needed redraw that onto the canvas. Again, this may lose quality.
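A sketch of both variants (assuming a 2D context; the offsets and the scale factor are illustrative):

const canvas = document.querySelector('canvas')!;
const ctx = canvas.getContext('2d')!;

// Translate: putImageData places raw pixels and ignores the current transform.
const snapshot = ctx.getImageData(0, 0, canvas.width, canvas.height);
ctx.clearRect(0, 0, canvas.width, canvas.height);
ctx.putImageData(snapshot, 10, 20); // shift everything by (10, 20)

// Scale: round-trip through a data URL so drawImage can resample.
const img = new Image();
img.onload = () => {
  ctx.clearRect(0, 0, canvas.width, canvas.height);
  ctx.drawImage(img, 0, 0, canvas.width * 2, canvas.height * 2); // clipped to the canvas
};
img.src = canvas.toDataURL();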
The above two are essentially kludges to keep using the canvas code you've already written. To get a proper system like the one you describe Tk as having, you want to:
Build your own scene graph: create a set of objects like Circle, Line, etc. which represent graphics, plus containers for them which store transform attributes like scale and position. Then write a routine that walks this graph and executes the appropriate drawing commands whenever you need to redraw.
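A minimal sketch of such a scene graph (the class names are illustrative, not from any library):

interface Shape { draw(ctx: CanvasRenderingContext2D): void; }

class Circle implements Shape {
  constructor(private x: number, private y: number, private r: number) {}
  draw(ctx: CanvasRenderingContext2D) {
    ctx.beginPath();
    ctx.arc(this.x, this.y, this.r, 0, Math.PI * 2);
    ctx.stroke();
  }
}

class Group implements Shape {
  children: Shape[] = [];
  x = 0; y = 0; scale = 1; // transform attributes, Tk-style
  draw(ctx: CanvasRenderingContext2D) {
    ctx.save();
    ctx.translate(this.x, this.y);
    ctx.scale(this.scale, this.scale);
    for (const child of this.children) child.draw(ctx);
    ctx.restore();
  }
}

// To "scale the canvas" Tk-style: change root.scale, then clear and redraw.
function redraw(ctx: CanvasRenderingContext2D, root: Group) {
  ctx.clearRect(0, 0, ctx.canvas.width, ctx.canvas.height);
  root.draw(ctx);
}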
Use SVG instead. SVG is a language for vector graphics which, in modern browsers, you can embed directly in your HTML and manipulate from JavaScript just like the rest of your page. In SVG you can simply change a transform attribute (e.g. scale(2)) and get the change you expect to see; see the sketch below.
(The previous option is basically reinventing a small amount of SVG.)
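For example (a minimal sketch; the element ids are illustrative):

<svg width="400" height="400">
  <g id="world">
    <circle cx="50" cy="50" r="20" fill="steelblue"/>
    <line x1="0" y1="0" x2="100" y2="100" stroke="black"/>
  </g>
</svg>

// From script: rescale everything already drawn; the browser re-renders for you.
document.getElementById('world')!.setAttribute('transform', 'scale(2) translate(10, 20)');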
Related
Is it a reasonable optimization to omit calls to ID2D1HwndRenderTarget::DrawBitmap() if the image will end up outside the visible area? Implementing the check myself will cost some performance, so if the first thing D2D does is the same check, I'd rather not duplicate it.
I ran a test with my application, which renders part of its UI using Direct2D (with RenderDoc attached), and the behavior seems a bit random.
I render a mix of rectangles, text, path geometries (Béziers), and rectangles with a bitmap brush (which should be equivalent to your DrawBitmap call).
Then I captured one frame with all those objects visible, and another after panning my UI (using a transform) so that the objects are not visible.
From there I could check what was actually drawn:
Text is always culled.
Solid-color rectangles are never culled.
Path geometries are culled most of the time, but not always.
Rectangles with a bitmap brush are NEVER culled.
So it seems Direct2D makes different decisions depending on the type of element you draw.
Since solid-color rectangles are easily batched and cheap to draw, they appear to be drawn regardless.
Text requires more work, so culling it pays off and it is culled aggressively; curiously, bitmap-brush rectangles were never culled in my captures even though they would seem to benefit just as much.
Whether a path geometry gets culled appears to depend on how many polygons the geometry is tessellated into (a path that tessellated to 26 primitives was not culled, while another that tessellated to 120 was).
So you can trust Direct2D to perform that optimization, but I would personally implement a quick rectangle-to-rectangle check just in case (it won't hurt your performance, as it's an extremely simple operation).
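Such a check is just a few comparisons. A sketch (the Rect shape mirrors the four fields of D2D1_RECT_F; remember to apply your current transform to the rectangle before testing):

interface Rect { left: number; top: number; right: number; bottom: number; }

// True if r overlaps the visible area at all; skip the draw call when false.
function intersects(r: Rect, visible: Rect): boolean {
  return r.left < visible.right && r.right > visible.left &&
         r.top < visible.bottom && r.bottom > visible.top;
}

// usage sketch: if (intersects(imageRect, renderTargetRect)) { /* DrawBitmap */ }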
I have a mapbox project in production where the street map the user customizes (location, zoom, and text) will ultimately be printed on a surface with rather small dimensions (3.5" x 2.25" at 600dpi). Keeping in mind that the zoom level affects the visibility of the different street types, the problem I am running into is this:
Since the canvas element renders at 72dpi, in order to get an accurate representation of how the map will print I actually have to make the map's div container real size at 72dpi (252px x 162px), which is of course quite small and far less detailed than the map will look when printed at 600dpi.
In order to allow people to interact with the map at a reasonable size on screen, the cheap solution is of course to scale up the canvas using CSS transforms, i.e. #mapContainer { transform: scale(2.5); }. However, this results in a very pixelated map since, unlike SVG vector graphics (as seen in the text and graphics overlays in the images below), the browser does not re-render the canvas when it scales up.
[Image: unscaled canvas]
[Image: scaled canvas]
I have spent a lot of time searching for a solution to this problem. At best, it looks like I may have to pull mapbox data into tiling services like nextzen with data-visualization libraries like D3.js, but I'd like to make one last-ditch effort to see if there is any way to trick the browser into rendering this element at a higher DPI without changing the map bounds or zoom.
I suspect the answer lies in a similar vein to this Stack Overflow question: Higher DPI graphics with HTML5 canvas. However, when I attempt it, I get a null value for var ctx = canvas.getContext('2d'), since the mapbox canvas is "webgl", not "2d". I have been looking into the WebGL method of resizing a canvas for higher DPI here: https://www.khronos.org/webgl/wiki/HandlingHighDPI, but I am really having a hard time understanding how exactly to redraw the canvas after the resize.
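For reference, the generic pattern those links describe is: give the canvas more device pixels while keeping its CSS size the same, then re-render. A sketch (the render() hook is hypothetical; a library like mapbox that owns the canvas would need to expose its own resize/redraw mechanism):

function resizeForDPI(canvas: HTMLCanvasElement, dpr: number) {
  canvas.width = Math.round(canvas.clientWidth * dpr);   // device pixels
  canvas.height = Math.round(canvas.clientHeight * dpr); // CSS size is unchanged
  const gl = canvas.getContext('webgl')!;
  gl.viewport(0, 0, gl.drawingBufferWidth, gl.drawingBufferHeight);
  // render(); // hypothetical: trigger whatever redraws the scene here
}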
Using OpenGL and GLUT, I want to render a scene from two different viewpoints. For the first viewpoint, it is a standard perspective projection using shaders. For the second viewpoint, it is a visualisation of the depth buffer. I want these two images to be contained within the same window, side-by-side.
So far, I have been using GLUT for display. For example, I use:
glutInitWindowSize(1000, 1000);
glutInitWindowPosition(500, 200);
glutCreateWindow("OpenGL Test");
This draws my scene across the entire window, for the one viewport I have defined. But can I use GLUT to draw two different images from two different viewports, as described above? Or perhaps this is not so easy with just GLUT, and I will need to create a window natively in my operating system (I am using Ubuntu) and then define two different areas of that window to draw into...
Thank you!
GLUT ultimately has nothing to do with it. It creates and manages a window. What you do within that window is entirely up to you.
What you need to do is use the viewport transform. Because the viewport transform happens after clipping, drawing commands will not render to pixels outside the viewport region (buffer clears, however, still affect the whole framebuffer). This effectively defines the region of the window that all transformed vertices will lie within.
So you call glViewport, specifying half of the window, then render the stuff you want in that half. Then you call glViewport to specify the other half and render the stuff you want there. And then you're done; just swap buffers.
However, this also means that the typical tactic of calling glViewport only in your GLUT resize callback will not work. You must store the window's current size and use it in your display function.
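A sketch of such a display function. (It is written against the WebGL bindings purely because the calls mirror the C API one-to-one, gl.viewport being glViewport; drawSceneColor and drawSceneDepth are placeholders for your own rendering code, and width/height are the stored window size.)

declare function drawSceneColor(gl: WebGLRenderingContext): void; // placeholder
declare function drawSceneDepth(gl: WebGLRenderingContext): void; // placeholder

function display(gl: WebGLRenderingContext, width: number, height: number) {
  gl.clear(gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT); // clears the whole framebuffer

  gl.viewport(0, 0, width / 2, height); // left half: normal perspective render
  drawSceneColor(gl);

  gl.viewport(width / 2, 0, width / 2, height); // right half: depth visualisation
  drawSceneDepth(gl);

  // with GLUT you would now call glutSwapBuffers()
}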
Two ways you can do this:
You can create a new window with glutCreateWindow(). Note that the new window will have its own OpenGL context, and that glutCreateWindow() returns an integer identifier for the window.
You can select part of the window using glViewport(), and then call glViewport() again to draw into a different part of the same window.
There is always the option of rendering your two views into a single texture, and then simply making a screen-size quad and rendering that texture onto it.
I'm not sure it's going to satisfy all your needs, but from a visual perspective this should give you the same result.
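A sketch of the render-to-texture plumbing, again in WebGL form since the calls mirror GL (drawViews and drawTexturedQuad are placeholders; a depth renderbuffer, shader setup, and error checks are omitted):

declare function drawViews(gl: WebGLRenderingContext): void; // placeholder: both viewports
declare function drawTexturedQuad(gl: WebGLRenderingContext, tex: WebGLTexture): void; // placeholder

function renderViaTexture(gl: WebGLRenderingContext, w: number, h: number) {
  // Colour texture that will receive the two views.
  const tex = gl.createTexture()!;
  gl.bindTexture(gl.TEXTURE_2D, tex);
  gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, w, h, 0, gl.RGBA, gl.UNSIGNED_BYTE, null);
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR);
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE);
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE);

  // Attach it to an off-screen framebuffer and render both views into it.
  const fbo = gl.createFramebuffer()!;
  gl.bindFramebuffer(gl.FRAMEBUFFER, fbo);
  gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0, gl.TEXTURE_2D, tex, 0);
  drawViews(gl);

  // Back to the default framebuffer: one screen-size quad with the texture.
  gl.bindFramebuffer(gl.FRAMEBUFFER, null);
  drawTexturedQuad(gl, tex);
}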
I am developing a map app for our school. The school provided me with its own map image and coordinate information, so I want to use that image as the map source and show a point on it at the user's location. Can anybody give me some advice?
Thanks in advance.
There are 2 ways:
It is possible to change the source of the map tiles (e.g. from Bing to, say, Nokia or Google) in the Map control. However, for this to work the tile source must implement mechanisms like quadkeys (e.g. see this). So, to answer your question: if you would like to use the Bing Map control with your school's map so that you can leverage the positioning features of the control, you would need a properly designed map-tile server. AND, there might be legal issues with altering the Bing Map control, if I am not mistaken.
However, given that you are suggesting an image of the map and then doing positioning, I would suggest it can be as easy as calibrating the pixel X-Y coordinate system of the map image against the geo-coordinates provided by the geo-watcher. Then, in your code, you do a simple mapping between these two systems and draw something on top of the image. For this part you could use a WriteableBitmap, or simply exploit the fact that you can overlay UI controls in Silverlight: have a Canvas containing an image of your school's map, and on top of that an <image> representing the device, whose top-left coordinate you change relative to the Canvas.
So, in summary: as the geo-watcher gives geo coordinates to your code, a mapping function (which you have pre-calculated) converts them to pixel X-Y, and you use that X-Y to position an overlay <image> or draw a "pin" on a WriteableBitmap onto which you have previously drawn the map image. Things get more complicated with this approach once you want zooming as well, but the solution scales easily.
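A sketch of that mapping, assuming you have calibrated two reference points (all numbers here are made up):

// Two landmarks whose geo coordinates and pixel positions on the map image you know.
const geoA = { lat: 1.3000, lon: 103.7700 }, pxA = { x: 40, y: 520 };
const geoB = { lat: 1.3050, lon: 103.7780 }, pxB = { x: 610, y: 60 };

// Linear interpolation between the two frames; assumes a north-up map image
// and an area small enough that a linear fit is acceptable.
function geoToPixel(lat: number, lon: number) {
  const x = pxA.x + (lon - geoA.lon) * (pxB.x - pxA.x) / (geoB.lon - geoA.lon);
  const y = pxA.y + (lat - geoA.lat) * (pxB.y - pxA.y) / (geoB.lat - geoA.lat);
  return { x, y };
}

// Use the result to set the top-left of the marker overlay on the canvas.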
Does this help clear things a bit?
Answering the 2nd question in the comment below:
Yes, you can zoom in and out of the canvas, but you have to program it yourself; the Canvas control does not have this capability built in. You have to recognize the triggers for a zoom action (e.g. clicking the (+) or (-) buttons, or pinch-and-stretch gestures) and react by redrawing a portion of the canvas so that the chosen region stretches over the entire canvas; that is zooming. For the zoom-in case, you determine a region whose size corresponds to the zoom factor and is in ratio to the dimensions of the canvas, then scale that portion up so that edges and the empty spaces between walls grow proportionately. You also have to fix the center point of that region on the canvas so that everything grows away from it; that gives an appropriate zooming effect. Finally, you have to re-adjust your geo-coordinate-to-pixel mapping so that the "pin" or object of interest can be drawn accurately on the newly rendered surface.
I understand that this can appear quite involved, but it is straightforward once you work through the mechanics of what is required.
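The core of it is just the geometry of zooming about a fixed point. A generic sketch (the View fields are illustrative):

// The view rectangle is the portion of the map image currently shown.
interface View { x: number; y: number; width: number; height: number; }

// Zoom about (cx, cy) by the given factor; redraw the returned region
// stretched over the whole canvas.
function zoomAbout(view: View, cx: number, cy: number, factor: number): View {
  return {
    width: view.width / factor,
    height: view.height / factor,
    // keep (cx, cy) at the same relative position inside the view
    x: cx - (cx - view.x) / factor,
    y: cy - (cy - view.y) / factor,
  };
}

// Afterwards, rerun the geo-to-pixel mapping against the new view so the
// pin stays accurate.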
Another, easier option could be to use SVG (Scalable Vector Graphics) in a WebBrowser control. Note that you would still need the geo-coordinate-to-pixel mapping. However, with this approach you get zooming for free: SVG has transformation capabilities for scaling up and down, and the WebBrowser control renders the SVG and handles the zoom gestures for you. The cost would be in recreating your school's bitmap map as SVG. There are tools like Inkscape which you can use to load the image of your map, trace the outlines over it, and save the result as an SVG document. In fact, I would recommend trying this approach before tackling the Canvas method, as I feel it would be the easiest path for your needs.
I am making a GUI in OpenGL (more specifically lwjgl). I have tried hard to research different ways of doing this, but I am having a hard time finding exactly what I want. I do not want to use any external libraries (only what is built into OpenGL; I am even trying to stay away from GLUT), and I would like it to work on anything that supports OpenGL (e.g. framebuffer objects don't work on older graphics cards).
I am making a 3D GUI with a scrollable panel as a component. The problem is I don't know how to draw a partial GUI component without doing a lot of calculations to render only part of it. I am building the components out of OpenGL primitives, not textures. I was hoping there is an easy way to do this, like using multiple viewports; I don't really even understand what viewports are.
In short: I need to have a scrollable panel as a component overlapping other GUI components (since it will be a drop down menu) and not let any of the components in my panel draw outside my panel.
If you just want to prevent drawing pixels that are outside of a rectangular region (and I think that's what you're asking), then glScissor is exactly what you're looking for.
In lwjgl, you can find the function in org.lwjgl.opengl.GL11.
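A minimal sketch, shown with the WebGL bindings since they mirror the C/lwjgl calls one-to-one (GL11.glEnable(GL11.GL_SCISSOR_TEST) and GL11.glScissor in lwjgl); the panel rectangle is illustrative:

// Clip all drawing to the panel's rectangle. Note that glScissor takes
// window coordinates with the origin at the BOTTOM-left.
function drawPanel(gl: WebGLRenderingContext, windowHeight: number) {
  const panel = { x: 100, y: 150, width: 200, height: 300 }; // y measured from the top
  gl.enable(gl.SCISSOR_TEST);
  gl.scissor(panel.x, windowHeight - panel.y - panel.height, panel.width, panel.height);
  // ... draw the panel's contents; any pixels outside the rectangle are discarded ...
  gl.disable(gl.SCISSOR_TEST);
}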
If you want to scroll a larger scene within a fixed region of the screen, the most straightforward way is to modify your projection matrix for the scroll position and redraw the scene. If you are using gluPerspective to set up your projection matrix, you'll have to convert it to a direct call to glFrustum; if you're using glOrtho, it's much more straightforward.
Keep in mind that "scrolling" a perspective view has no single right way to do it; it depends on what sort of effect you want to achieve and what particular sort of distortion you want near the edges of the overall viewport.
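For the glOrtho case, scrolling amounts to shifting the projection bounds by the scroll offset and redrawing. A sketch (scrollX/scrollY and the view size are your panel's state):

// Compute orthographic bounds for the current scroll position. Passing
// bottom > top to glOrtho makes y grow downward, the usual GUI convention.
function orthoBounds(scrollX: number, scrollY: number, viewW: number, viewH: number) {
  return {
    left: scrollX,
    right: scrollX + viewW,
    bottom: scrollY + viewH,
    top: scrollY,
  };
}

// In C: glOrtho(b.left, b.right, b.bottom, b.top, -1, 1), then redraw the panel.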