Convert an Adobe Illustrator vector image to OpenGL - opengl-es

How can I take a vector image produced by Adobe Illustrator and get the points required to reproduce it with OpenGL calls?
What I want to do is take some of my two-dimensional artwork and render it directly in my programs. This will let me support any resolution I might require while still looking crisp, and it will probably save some memory as well. I just can't figure out how to make it happen. Ultimately, this is for Android programming.

OpenGL is not a very good API for drawing 2D vector illustrations. Those usually contain a lot of curved paths (Bézier and/or NURBS), which must be broken down into line segments or triangles before they can be drawn with OpenGL.
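For illustration, a minimal sketch of that flattening step for a single cubic Bézier segment, using a fixed subdivision count (adaptive subdivision and the triangulation of filled shapes are out of scope here, and attr_pos stands in for whatever position attribute your shader program uses):

    #include <GLES2/gl2.h>

    /* Evaluate a cubic Bezier curve at parameter t (Bernstein form). */
    static void bezier_point(const float p[4][2], float t, float out[2])
    {
        float u = 1.0f - t;
        float b0 = u * u * u;
        float b1 = 3.0f * u * u * t;
        float b2 = 3.0f * u * t * t;
        float b3 = t * t * t;
        out[0] = b0 * p[0][0] + b1 * p[1][0] + b2 * p[2][0] + b3 * p[3][0];
        out[1] = b0 * p[0][1] + b1 * p[1][1] + b2 * p[2][1] + b3 * p[3][1];
    }

    /* Flatten one curve into SEGS+1 vertices and draw it as a line strip. */
    void draw_bezier(const float ctrl[4][2], GLint attr_pos)
    {
        enum { SEGS = 32 };            /* fixed subdivision; tune per zoom level */
        GLfloat verts[(SEGS + 1) * 2];
        for (int i = 0; i <= SEGS; ++i) {
            float pt[2];
            bezier_point(ctrl, (float)i / SEGS, pt);
            verts[i * 2 + 0] = pt[0];
            verts[i * 2 + 1] = pt[1];
        }
        glVertexAttribPointer(attr_pos, 2, GL_FLOAT, GL_FALSE, 0, verts);
        glEnableVertexAttribArray(attr_pos);
        glDrawArrays(GL_LINE_STRIP, 0, SEGS + 1);
    }

Tuning SEGS against the current zoom level is what keeps the result crisp at any resolution.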
There's another API, called OpenVG, that has been specially crafted for drawing 2D vector illustrations. Some OpenVG implementations support interoperation with OpenGL, and some use OpenGL as their backend.
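To give an idea of the abstraction level, a hedged sketch of drawing one filled shape with OpenVG 1.1 (the context and surface setup, typically done through EGL, is assumed to already exist):

    #include <VG/openvg.h>
    #include <VG/vgu.h>

    /* Build a path, attach a fill paint, and draw it. */
    void draw_ellipse(void)
    {
        VGPath path = vgCreatePath(VG_PATH_FORMAT_STANDARD, VG_PATH_DATATYPE_F,
                                   1.0f, 0.0f, 0, 0, VG_PATH_CAPABILITY_ALL);
        vguEllipse(path, 160.0f, 120.0f, 100.0f, 60.0f);  /* cx, cy, w, h */

        VGPaint fill = vgCreatePaint();
        VGfloat color[4] = { 0.2f, 0.4f, 0.8f, 1.0f };    /* RGBA */
        vgSetParameterfv(fill, VG_PAINT_COLOR, 4, color);
        vgSetPaint(fill, VG_FILL_PATH);

        vgDrawPath(path, VG_FILL_PATH);  /* curves stay curves; no manual tessellation */

        vgDestroyPaint(fill);
        vgDestroyPath(path);
    }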
Another option, and the more viable one at this time, is to use a vector graphics drawing library that uses OpenGL as its backend. One such library is Cairo, which also has an SVG reader available (via librsvg).
Adobe Illustrator (.ai) is a proprietary format, so I'd rather not use it. However, Illustrator can export to SVG just fine, and Cairo reads and draws SVG files generated by Illustrator just fine.
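A rough sketch of that pipeline, assuming librsvg for parsing (svg_to_texture is an illustrative name; rsvg_handle_render_cairo is the long-standing API, deprecated in newer librsvg versions in favor of rsvg_handle_render_document): rasterize the SVG with Cairo, then upload the pixels as a GL texture:

    #include <cairo.h>
    #include <librsvg/rsvg.h>
    #include <GLES2/gl2.h>

    /* Rasterize an SVG file at the requested pixel size and upload it as a
     * GL texture. Returns 0 on failure; error handling kept minimal. */
    GLuint svg_to_texture(const char *path, int width, int height)
    {
        GError *err = NULL;
        RsvgHandle *svg = rsvg_handle_new_from_file(path, &err);
        if (!svg)
            return 0;

        cairo_surface_t *surf =
            cairo_image_surface_create(CAIRO_FORMAT_ARGB32, width, height);
        cairo_t *cr = cairo_create(surf);

        RsvgDimensionData dim;                 /* scale document to target size */
        rsvg_handle_get_dimensions(svg, &dim);
        cairo_scale(cr, (double)width / dim.width, (double)height / dim.height);
        rsvg_handle_render_cairo(svg, cr);
        cairo_surface_flush(surf);

        GLuint tex;
        glGenTextures(1, &tex);
        glBindTexture(GL_TEXTURE_2D, tex);
        /* Cairo stores premultiplied BGRA; core GL ES has no GL_BGRA format,
         * so a real implementation would swizzle channels or use an extension. */
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
                     GL_RGBA, GL_UNSIGNED_BYTE,
                     cairo_image_surface_get_data(surf));
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

        cairo_destroy(cr);
        cairo_surface_destroy(surf);
        g_object_unref(svg);
        return tex;
    }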

Use Inkscape's command line to run --export-plain-svg on the Illustrator file. It's not a perfect conversion (particularly for fonts), but it gets the job done.
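For example (the flag syntax differs between Inkscape versions; 1.x moved the output name to --export-filename):

    # Inkscape 0.92 and earlier
    inkscape artwork.ai --export-plain-svg=artwork.svg

    # Inkscape 1.x
    inkscape artwork.ai --export-plain-svg --export-filename=artwork.svg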
Does anyone know if Cairo has something similar to --export-plain-svg, since Inkscape is based on it and it's so low-level (faster)?

Related

2D graphics without OpenGL / DirectX, as in a GUI toolkit

I am wondering how to create 2D graphics without using OpenGL or DirectX. What do toolkits like Qt or GTK use to draw what are basically colored rectangles (and text)?
I know that with KDE 5 and GNOME 3 there were some complaints that OpenGL is now required (for certain effects, including 3D stuff like the desktop cube that was trendy for a while). So apparently something simpler was used before, yet I can't find out what. Here the answers are basically: OpenGL or SDL …
Well, the most basic way to draw a window is to use Xlib on Linux or Win32 on Windows. These are very low-level window-drawing APIs that also handle events, but it would probably be a lot of work to use them on their own.
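To make that concrete, a minimal Xlib sketch that opens a window and fills one colored rectangle (no double buffering, no resize handling; compile with -lX11):

    #include <X11/Xlib.h>

    int main(void)
    {
        Display *dpy = XOpenDisplay(NULL);          /* connect to the X server */
        if (!dpy) return 1;
        int scr = DefaultScreen(dpy);
        Window win = XCreateSimpleWindow(dpy, RootWindow(dpy, scr),
                                         0, 0, 400, 300, 1,
                                         BlackPixel(dpy, scr),
                                         WhitePixel(dpy, scr));
        XSelectInput(dpy, win, ExposureMask | KeyPressMask);
        XMapWindow(dpy, win);

        for (;;) {                                  /* the event loop IS the app */
            XEvent ev;
            XNextEvent(dpy, &ev);
            if (ev.type == Expose) {
                /* draw a "colored rectangle" the hard way */
                XSetForeground(dpy, DefaultGC(dpy, scr), 0x3366CC);
                XFillRectangle(dpy, win, DefaultGC(dpy, scr), 50, 50, 200, 100);
            }
            if (ev.type == KeyPress)
                break;                              /* any key quits */
        }
        XCloseDisplay(dpy);
        return 0;
    }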
SDL, SFML, or OpenGL are probably better options in most cases, since the window rendering protocols can draw rectangles and images but lack a lot of quality-of-life features that make your life as a dev easier. If you are looking for the absolute best performance, Xlib (or Wayland) might be the way to go, but if you are looking for a simple way to code a GUI application, it's probably a bad idea.
If you want a great and easy-to-use GUI for menus and such, Dear ImGui is very impressive and easy to use, and it can run on a variety of rendering backends, including SDL and DirectX.
This answer also seems relevant:
Does OpenGL use Xlib to draw windows and render things, or is it the other way around?
You'll notice that at the end they discuss other ways to draw, namely AGG and Cairo. It's a bit of a wall of text, but a very detailed answer.

OpenGL: combine textures side by side

I'm currently using the FreeType library to render text in my OpenGL ES 2.0 app (both iOS and Android), and I've come to the point where I seriously have to consider performance. I have a store of textures (one for each character), and whenever I render text, OpenGL binds each character's texture and renders it to the screen... for each letter! Very inefficient. So I'm looking into ways to make this faster, and one of the ideas I had was to pre-render each word instead of each letter. So, now for the big question: how do I combine textures side by side? I know I need to create a buffer and that somehow it will come from FT_LOAD_CHAR, but I don't really know much else.
Actually, you have misunderstood how to use FreeType. You can load the font with the FT library and render all the characters into a single bitmap texture (an atlas). Then you just use texture coordinates to pick the right UVs for each glyph.
This link can help to some extent: Draw FreeType Font in OpenGL ES
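A minimal sketch of that atlas approach, packing the printable ASCII range into one row of a single-channel texture (baseline metrics and multi-row packing are omitted for brevity; ATLAS_W, ATLAS_H, and pen_x are made-up names for this sketch):

    #include <ft2build.h>
    #include FT_FREETYPE_H
    #include <GLES2/gl2.h>
    #include <string.h>

    #define ATLAS_W 1024
    #define ATLAS_H 64

    /* Render glyphs 32..126 into one row of an 8-bit buffer, then upload it
     * as a single GL_LUMINANCE texture. Per-glyph x offsets go in pen_x[]. */
    GLuint build_ascii_atlas(const char *font_path, int px_size, int pen_x[128])
    {
        FT_Library ft;
        FT_Face face;
        static unsigned char pixels[ATLAS_W * ATLAS_H];

        if (FT_Init_FreeType(&ft) || FT_New_Face(ft, font_path, 0, &face))
            return 0;
        FT_Set_Pixel_Sizes(face, 0, px_size);

        int x = 0;
        for (int c = 32; c < 127; ++c) {
            if (FT_Load_Char(face, c, FT_LOAD_RENDER))
                continue;
            FT_Bitmap *bm = &face->glyph->bitmap;
            if (x + (int)bm->width >= ATLAS_W)
                break;                   /* a real packer would start a new row */
            pen_x[c] = x;                /* remember where this glyph landed */
            for (unsigned row = 0; row < bm->rows && row < ATLAS_H; ++row)
                memcpy(&pixels[row * ATLAS_W + x],
                       &bm->buffer[row * bm->pitch], bm->width);
            x += bm->width + 1;          /* 1px gap against filtering bleed */
        }
        FT_Done_Face(face);
        FT_Done_FreeType(ft);

        GLuint tex;
        glGenTextures(1, &tex);
        glBindTexture(GL_TEXTURE_2D, tex);
        glPixelStorei(GL_UNPACK_ALIGNMENT, 1);   /* rows are not 4-byte padded */
        glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE, ATLAS_W, ATLAS_H, 0,
                     GL_LUMINANCE, GL_UNSIGNED_BYTE, pixels);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        return tex;
    }

With the atlas built, a whole string becomes one buffer of quads sampling that texture, i.e. a single bind and a single draw call instead of one per letter.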

Procedurally generated GUI

I've developed an interactive audio visualization engine. I need to make its GUI scale to various screen sizes with various PPIs (this includes both very large screens and mobile devices). The designer simply sent me a PSD with graphical representations of the supported widgets, which I'm exporting to PNGs. The problem is that those bitmaps are of course not scalable and look ugly.
I've thought about several ways to achieve a resolution- and PPI-independent GUI:
Export PNGs in various sizes and select the appropriate set at runtime (a waste of space simply for storing bitmaps at various resolutions)
Use scale-9 images only (no fancy stuff)
Use SVG (not supported by rendering APIs; I could use something like NanoVG for OpenGL, but what do I do with a raw framebuffer then? There are also performance problems, and it's too much complexity for what I need)
So I came to the idea of pregenerating bitmaps at runtime, once for the specific device, and using them afterwards. Are there any libraries for that, and maybe already-available themes which I could employ for now? I imagine the tool could work similarly to the Cairo graphics library or the JavaScript canvas, reading a command list and outputting a bitmap. Any other ideas? A sketch of what I have in mind follows.
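For instance, something roughly like this: Cairo rasterizes one widget (a rounded button) at the device's scale factor, and the resulting pixels get uploaded to the GPU once (render_button is just an illustrative name):

    #include <cairo.h>
    #include <math.h>

    /* Rasterize a rounded-rectangle button at the device scale factor and
     * return the backing surface; the caller uploads the pixels to the GPU
     * via cairo_image_surface_get_data() after cairo_surface_flush(). */
    cairo_surface_t *render_button(int w, int h, double scale)
    {
        cairo_surface_t *surf = cairo_image_surface_create(
            CAIRO_FORMAT_ARGB32, (int)(w * scale), (int)(h * scale));
        cairo_t *cr = cairo_create(surf);
        cairo_scale(cr, scale, scale);         /* draw in logical units */

        double r = 8.0;                        /* corner radius, logical px */
        cairo_new_sub_path(cr);
        cairo_arc(cr, w - r, r,     r, -M_PI / 2, 0);
        cairo_arc(cr, w - r, h - r, r, 0, M_PI / 2);
        cairo_arc(cr, r,     h - r, r, M_PI / 2, M_PI);
        cairo_arc(cr, r,     r,     r, M_PI, 3 * M_PI / 2);
        cairo_close_path(cr);

        cairo_set_source_rgb(cr, 0.2, 0.4, 0.8);   /* fill */
        cairo_fill_preserve(cr);
        cairo_set_source_rgb(cr, 0.1, 0.1, 0.1);   /* outline */
        cairo_set_line_width(cr, 1.0);
        cairo_stroke(cr);

        cairo_destroy(cr);
        return surf;
    }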
One possible solution is this:
CPlayer is a procedural graphics player with an IMGUI toolkit. It can be used for anything from quick demos, prototyping graphics apps, to full-fledged apps and games.
http://luapower.com/cplayer.html

OpenGL Render to texture

I know this has been asked before (I did search) but I promise you this one is different.
I am making an app for Mac OS X Mountain Lion, but I need to add a little bit of a bloom effect. I need to render the entire scene to a texture the size of the screen, reduce the size of the texture, pass it through a pixel buffer, then use it as a texture for a quad.
I ask this again because a few of the usual techniques do not seem to work. I cannot use #version, layout, or out in my fragment shader, as they do not compile. If I just use gl_FragColor as normal, I get random pieces of the screen behind my app rather than the scene I am trying to render. The documentation doesn't say anything about such things.
So, basically, how can I render to a texture properly with the Mac implementation of OpenGL? Do you need to use extensions to do this?
I use the code from here
Rendering to a texture is best done using FBOs, which let you render directly into the texture. If your hardware/driver doesn't support OpenGL 3+, you will have to use the FBO functionality through the ARB_framebuffer_object core extension or the EXT_framebuffer_object extension.
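A minimal sketch of that FBO setup (written against core GL 3.x names; with the EXT extension the same calls carry an EXT suffix, and a depth renderbuffer is omitted here; create_render_target is an illustrative name):

    #include <GL/gl.h>   /* assumes a loader has resolved GL 3.x entry points */

    /* Create a texture and a framebuffer object that renders into it.
     * Returns the FBO name, or 0 if the framebuffer is incomplete. */
    GLuint create_render_target(int w, int h, GLuint *out_tex)
    {
        GLuint tex, fbo;

        glGenTextures(1, &tex);
        glBindTexture(GL_TEXTURE_2D, tex);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, w, h, 0,
                     GL_RGBA, GL_UNSIGNED_BYTE, NULL);    /* no initial data */
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

        glGenFramebuffers(1, &fbo);
        glBindFramebuffer(GL_FRAMEBUFFER, fbo);
        glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                               GL_TEXTURE_2D, tex, 0);

        if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE)
            return 0;                                     /* incomplete: bail */

        glBindFramebuffer(GL_FRAMEBUFFER, 0);             /* back to screen */
        *out_tex = tex;
        return fbo;
    }

To use it: bind the FBO, set glViewport to the texture size, draw the scene, bind framebuffer 0 again, and sample out_tex on a screen-sized quad for the bloom downsampling passes.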
If FBOs are not supported at all, you will either have to resort to a simple glCopyTexSubImage2D (which involves a copy, though, even if only GPU-to-GPU) or use the more flexible but rather intricate (and deprecated) PBuffers.
This tutorial on FBOs provides a simple example for rendering to a texture and using this texture for rendering afterwards. Since the question lacks specific information about the particular problems you encountered with your approach, those rather general googlable pointers to the usual render-to-texture resources need to suffice for now.

OpenVG and WebGL

Is there a JavaScript implementation of the OpenVG standard based on WebGL?
I'm well aware that we can render vector graphics in the browser; I'm just curious whether anyone has actually managed to render SVG with WebGL, with or without basing it on the OpenVG standard. If it doesn't exist, would it be useful to start a project?
Here's some C code for parsing and rendering SVG in OpenGL 1.x: https://github.com/tnovelli/vsprite. It might not be quite what you're looking for. We used a stencil buffer trick to draw 2D polygons. For 3D, I guess you'd have to render into an off-screen buffer and texture-map it onto a 3D object. (Why not pre-render the SVG into raster images? Because our objects are breakable and deformable.)
I'm thinking about porting this to Javascript+WebGL. The browser's XML/SVG/CSS parsing features should eliminate a lot of the work, but the stencil trick could pose a challenge. This is a back burner project for me, so if anyone else wants to try something, don't hold your breath, just do it! :)
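For reference, the stencil trick in its classic fixed-function form (matching vsprite's OpenGL 1.x target; the window must be created with stencil bits for this to work):

    #include <GL/gl.h>

    /* Fill an arbitrary (possibly concave or self-intersecting) polygon using
     * the stencil invert trick: every fragment covered an odd number of times
     * by the triangle fan ends up inside the shape (even-odd fill rule). */
    void fill_polygon(const float *xy, int n,
                      float minx, float miny, float maxx, float maxy)
    {
        /* Pass 1: build coverage in the stencil buffer, no color writes. */
        glEnable(GL_STENCIL_TEST);
        glClear(GL_STENCIL_BUFFER_BIT);
        glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
        glStencilFunc(GL_ALWAYS, 0, 1);
        glStencilOp(GL_KEEP, GL_KEEP, GL_INVERT);

        glBegin(GL_TRIANGLE_FAN);            /* fan from the first vertex */
        for (int i = 0; i < n; ++i)
            glVertex2f(xy[i * 2], xy[i * 2 + 1]);
        glEnd();

        /* Pass 2: draw a bounding quad wherever the stencil value is 1. */
        glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
        glStencilFunc(GL_EQUAL, 1, 1);
        glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP);
        glBegin(GL_QUADS);
        glVertex2f(minx, miny); glVertex2f(maxx, miny);
        glVertex2f(maxx, maxy); glVertex2f(minx, maxy);
        glEnd();
        glDisable(GL_STENCIL_TEST);
    }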
Well, none of these answers answer the question explicitly. @tom shows that, yes, we can render vector graphics on the WebGL canvas using a neat stencil trick. This isn't, however, a full implementation of the OpenVG specification, and I am curious how much of the OpenVG spec could be implemented this way. So, to conclude:
There are no implementations currently, but there doesn't appear to be particular demand for one.
As WebGL mixes just fine with any other HTML(5) technique, you can already mix them. The only thing is that you won't be able to mix SVG with WebGL using its depth buffer, for example a logotype (SVG) placed in a 3D world (WebGL). But maybe that's just what you'd like to do?