Small Basic - GetPixel() always returns #000000 with Turtle

I have yet to see this question asked or answered anywhere, so I thought I should try here. Not being that familiar with Small Basic's language logic, I'm writing my own little programs to learn the language better. Recently I tried to use GetPixel(), which works fine with a regular DrawLine in the graphics window. But when I draw a line using the Turtle and then call GetPixel(), it returns #000000 (black) when it should return #0000FF (blue).
Does anyone know if GetPixel() works when drawing with the Turtle? I thought it would, since they share the same space in the graphics window.

As explained in this article http://social.technet.microsoft.com/wiki/contents/articles/29753.small-basic-pixel.aspx, GetPixel() reads from the "Drawing" layer. The layers are explained here: http://social.technet.microsoft.com/wiki/contents/articles/15059.small-basic-graphicswindow-basics.aspx.
Unfortunately, the Turtle draws on top of all layers, so GetPixel() can't read the pixels drawn by the Turtle.

Related

Creating frosted glass in three.js / WebGL

I'm having trouble finding out how to create a material with the look of frosted glass. I haven't found anything on the web that looks like what I want to do.
I've tried a lot of settings for the material.
In this link you can see what I'm trying to get.
Does anybody have an idea how to solve this?
Regards
Rikard
One approach that has worked well for me in the past is to blit the portion of the framebuffer you want frosted, using the blur algorithm or normal pattern of your choice. A stencil mask, as part of the glass shader, is then used to determine which portions should be affected and which should not.
This article has a nice write-up on glass refraction which, when combined with a blur, gives a good effect.
https://beclamide.medium.com/advanced-realtime-glass-refraction-simulation-with-webgl-71bdce7ab825
I know it's not WebGL per se, but I've used the Unity frosted-glass shader below before, to great effect. You may be able to extract the pertinent pieces from it and use that knowledge to assemble a WebGL version. https://github.com/andydbc/unity-frosted-glass
I'm about to undertake this myself, and will update this answer with actual code 'if' I succeed.

I don't fully understand D2D1_FIGURE_BEGIN: why is it needed, what's the difference, and why does Microsoft's sample code mismatch types anyway?

I'm reading up on Direct2D before I migrate my GDI code to it, and I'm trying to figure out how paths work. I understand most of the work involved with geometries and geometry sinks, but there's one thing I don't understand: the D2D1_FIGURE_BEGIN type and its parameter to BeginFigure().
First, why is this value even needed? Why does a geometry need to know ahead of time whether it's filled or hollow? I don't know of any other drawing API that cares whether path objects are filled ahead of time; you just define the endpoints of the shapes and then call fill() or stroke() to draw your path. So how are geometries any different?
And if this parameter is necessary, how does choosing one value over the other affect the shapes I draw?
Finally, if I understand the usage of this enumeration correctly, you're supposed to use filled paths only with FillGeometry() and hollow paths only with DrawGeometry(). However, the hourglass example here, which is cited by several method documentation pages (like the one for BeginFigure()), creates a filled figure and draws it with both DrawGeometry() and FillGeometry()! Is this undefined behavior? Does it have anything to do with the blue border around the gradient in the example picture, which I don't see anywhere in the code?
Thanks.
EDIT: Okay, I think I understand what's going on with the gradient's weird outline: the gradient is also transitioning alpha values, and the fill overlaps the stroke because the stroke is centered on the line and the fill is drawn after the stroke. That still doesn't explain why I can both fill and stroke a filled geometry, or what the difference between hollow and filled geometries is...
Also I just realized that hollow geometries are documented as not having bounds. Does this mean that hollow geometries are purely an optimization for stroke-only geometries and otherwise behave identically to a filled geometry?
If you want to better understand Direct2D's geometry system, I recommend studying the WPF geometry system. WPF, XPS, Direct2D, Silverlight, and the newer "XAML" frameworks all use the same building blocks (the same "language", if you will). I found it easier to understand the declarative, object-oriented API in WPF, and after that it was a breeze to work with the imperative API in Direct2D. You can think of WPF's mutable geometry system as an implementation of the "builder" pattern from Java, where the build() method is hidden from you and spits out an immutable Direct2D geometry when it comes time to render things on-screen. (WPF uses something called "MIL", which, IIRC/AFAICT, Direct2D was forked from. They really are the same thing!)
It is also straightforward to write code that converts between the two representations, e.g. walking a WPF PathGeometry and streaming it into a Direct2D geometry sink, and you can also use ID2D1PathGeometry::Stream with a custom ID2D1GeometrySink implementation to reconstitute a WPF PathGeometry.
(BTW this is not theoretical :) It's exactly what I do in Paint.NET 4.0+: I use a WPF-esque declarative, mutable object model that spits out immutable Direct2D geometries at render time. It works really well!)
Okay, anyway, to get directly to your specific question: BeginFigure() and D2D1_FIGURE_BEGIN map directly to the PathFigure.IsFilled property in WPF. In order to get an intuitive understanding of what effect this has, you can use something like KaXAML to play around with some geometries from WPF or Silverlight samples and see what the results look like. And the documentation is definitely better for WPF and Silverlight than for Direct2D.
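For a concrete feel for what the flag does, here is a minimal C++ sketch (the factory, rt and brush variables are assumed to already exist; the points are made up). Flipping the flag to D2D1_FIGURE_BEGIN_HOLLOW makes the figure contribute nothing to FillGeometry(), while DrawGeometry() still strokes its outline:

    ID2D1PathGeometry* path = nullptr;
    ID2D1GeometrySink* sink = nullptr;
    factory->CreatePathGeometry(&path);
    path->Open(&sink);

    // The flag is per figure, not per geometry: FILLED figures take part in
    // fills and hit tests, HOLLOW figures are stroke-only.
    sink->BeginFigure(D2D1::Point2F(50.0f, 50.0f), D2D1_FIGURE_BEGIN_FILLED);
    sink->AddLine(D2D1::Point2F(250.0f, 50.0f));
    sink->AddLine(D2D1::Point2F(150.0f, 200.0f));
    sink->EndFigure(D2D1_FIGURE_END_CLOSED);
    sink->Close();

    rt->FillGeometry(path, brush);          // interior (FILLED figures only)
    rt->DrawGeometry(path, brush, 2.0f);    // outline (FILLED and HOLLOW figures)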
Another key concept is that DrawGeometry is basically a helper method. You can accomplish the same thing by first widening your geometry with ID2D1Geometry::Widen and then using FillGeometry ("widening" seems like a misnomer to me, by the way: in Photoshop or Illustrator you'd probably use a verb like "stroke"). That's not to say that either one always performs better or worse ... be sure to benchmark; I've seen it go both ways. The reason you can think of this as a helper method is that the lowest level of the rasterization engine can only do one thing: fill a triangle. All other drawing "primitives" must be converted to triangle lists or strips (this is also why ID2D1Mesh is so fast: it bypasses all sorts of processing code!). Filling a geometry requires tessellating its interior into a list of triangle strips, which can then be filled by Direct3D. "Drawing" a geometry requires applying a stroke (width and/or style): even a simple 1-pixel-wide straight line must first be converted to 2 filled triangles.
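To make the Widen + FillGeometry equivalence concrete, here is a rough sketch using the same hypothetical path, factory, rt and brush as above (it shows the equivalence being described, not how Direct2D implements DrawGeometry internally):

    // "Stroke" by hand: widen the source geometry into a new path geometry
    // that outlines the stroke, then fill the widened geometry.
    ID2D1PathGeometry* widened = nullptr;
    ID2D1GeometrySink* widenSink = nullptr;
    factory->CreatePathGeometry(&widened);
    widened->Open(&widenSink);
    path->Widen(
        2.0f,                                // stroke width
        nullptr,                             // default stroke style
        nullptr,                             // no world transform
        D2D1_DEFAULT_FLATTENING_TOLERANCE,
        widenSink);
    widenSink->Close();

    rt->FillGeometry(widened, brush);        // visually the same as
                                             // rt->DrawGeometry(path, brush, 2.0f)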
Oh, also, if you want to compute the "real" bounds of a geometry with hollow figures, use ID2D1Geometry::GetWidenedBounds with a strokeWidth of zero. This is a discrepancy between Direct2D and WPF that puzzles me. Geometry.Bounds (in WPF) is equivalent to ID2D1Geometry::GetWidenedBounds(0.0f).
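In code, that bounds query looks something like this (again using the hypothetical path from the sketches above):

    // Bounds that also account for hollow figures: widen with a zero stroke width.
    D2D1_RECT_F bounds = {};
    path->GetWidenedBounds(
        0.0f,                                // stroke width of zero
        nullptr,                             // no stroke style
        nullptr,                             // no world transform
        D2D1_DEFAULT_FLATTENING_TOLERANCE,
        &bounds);                            // WPF's Geometry.Bounds equivalent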

Is there a way to add an outline in SceneKit?

I've been making a game in SceneKit, but the edges of objects are hard to make out, which makes some of the game's details impossible to see. Is there a way to add a black outline around all the game objects?
You could use an SCNTechnique as mentioned in another answer (have a look at this article about cel shading, which has an edge-detection pass), but full-frame post-processes are quite expensive.
On OS X you can also leverage geometry shaders (see this article), but they aren't available on iOS and might be harder to implement and get right.
I would go with a much easier technique, which only involves vertex and fragment shaders. You can take a look at this article, which gives an example that's easy to re-create in SceneKit using SCNProgram or shader modifiers.
There is an example of making a glowing outline for nodes that uses SCNTechnique here:
https://github.com/laanlabs/SCNTechniqueGlow
You could modify the color and blur method to achieve a stroked outline effect.
Another SCNTechnique example, as referenced here: https://www.nurfacegames.com/everything-you-wanted-to-know-about-outline-shaders/, is to render your node slightly larger behind, then again in front at its regular size.
Here's a playground example of that: https://github.com/mackhowell/scenekit-outline-shader-scntechnique.

Drawing atop a scrollable, zoomable image in Qt

I'm sorry if my question is somewhat vague. It's been a few years since I did anything with Qt, and back then I never did any fancy image stuff. What I'm asking for below is just some general suggestions on which classes to consider using. I'm trying to avoid barking up the wrong tree from the very start.
The situation: I'm writing a Qt-based program in which I need to display a somewhat large (let's say 5000x5000) raster image. The user should be able to zoom in and out quickly, and pan around the image in a way similar to, for example, Google Maps. So far, this is not very different from the Qt Image Viewer example, except perhaps for the requirement that zooming happens quickly. However, I also need to draw on the order of 50k simple geometric shapes (let's say circles) on top of the image, and be able to add and remove some of these in a simple way. The circles should have the same size no matter the zoom level, and should thus either be redrawn whenever the user zooms or be drawn as vector graphics. Think of the circles as map annotations: they should look the same at any zoom level, and also behave nicely with respect to panning.
I guess my question is twofold:
Can Qt draw vector graphics on top of a raster image?
In general, which classes should I consider for the above?
Thanks in advance. I don't like answering vague questions myself, but maybe someone with experience with Qt's graphics capabilities has an answer.
I suggest you use QGraphicsView and friends for this. It handles all the view/world transformations, and the vector items can be realized with various QGraphicsItem subclasses.
You can change the sizes of the items whenever the zoom level changes to maintain a constant apparent size.
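As a rough sketch of that setup (the image path, item positions and counts are placeholders; QGraphicsItem::ItemIgnoresTransformations is one way to get the constant apparent size without rescaling by hand):

    #include <QApplication>
    #include <QGraphicsEllipseItem>
    #include <QGraphicsPixmapItem>
    #include <QGraphicsScene>
    #include <QGraphicsView>
    #include <QPainter>
    #include <QPixmap>

    int main(int argc, char* argv[])
    {
        QApplication app(argc, argv);

        QGraphicsScene scene;

        // The large raster image sits at the back of the scene.
        QGraphicsPixmapItem* image = scene.addPixmap(QPixmap("map.png"));
        image->setZValue(0);

        // Each annotation is a small vector item; ItemIgnoresTransformations
        // keeps its on-screen size constant regardless of the view's zoom.
        for (int i = 0; i < 1000; ++i) {
            QGraphicsEllipseItem* circle = scene.addEllipse(-4, -4, 8, 8);
            circle->setPos((i * 37) % 5000, (i * 91) % 5000);  // placeholder positions
            circle->setFlag(QGraphicsItem::ItemIgnoresTransformations);
            circle->setZValue(1);
        }

        QGraphicsView view(&scene);
        view.setDragMode(QGraphicsView::ScrollHandDrag);  // map-style panning
        view.setRenderHint(QPainter::Antialiasing);
        view.scale(0.25, 0.25);                           // initial zoom level
        view.show();

        return app.exec();
    }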

What kind of cool graphics algorithms can I implement?

I'm going to program a fancy (animated) about-box for an app I'm working on. Since this is where programmers are often allowed to shine and play with code, I'm eager to find out what kind of cool algorithms the community has implemented.
The algorithms can be animated fractals, sine blobs, flames, smoke, particle systems etc.
However, a few natural constraints come to mind: it should be possible to implement the algorithm in virtually any language, so advanced DirectX or XNA code that relies on libraries not accessible in most languages should not be posted. 3D is most welcome, but it shouldn't rely on lots of extra installs.
If you could post an image along with your code effect, it would be awesome.
Here's an example of a cool about box with an animated 3D figure and some animated sine blobs in the title bar:
And here's an image of the about box used in Winamp, complete with 3D animations:
I tested and ran the code on this page. It produces an old-school 2D flame effect. Even when I ran it fullscreen in HD on an N270 it seemed to work fine with no lag. The code and all sources are posted on the linked page.
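The core of that kind of 2D flame effect is tiny; here is a rough C++ sketch of one update step (not the code from the linked page; the cooling constant and the palette are up to you):

    #include <algorithm>
    #include <cstdint>
    #include <cstdlib>
    #include <vector>

    // One update step of a classic fire effect on a w x h heat buffer:
    // seed random heat on the bottom row, then let every cell become the
    // average of the cells below it minus a small cooling term.
    void fireStep(std::vector<uint8_t>& heat, int w, int h)
    {
        for (int x = 0; x < w; ++x)                     // flickering heat source
            heat[(h - 1) * w + x] = 160 + std::rand() % 96;

        for (int y = 0; y < h - 1; ++y) {
            for (int x = 0; x < w; ++x) {
                int l = (x + w - 1) % w, r = (x + 1) % w;
                int sum = heat[(y + 1) * w + l] + heat[(y + 1) * w + x]
                        + heat[(y + 1) * w + r] + heat[std::min(y + 2, h - 1) * w + x];
                int v = sum / 4 - 2;                    // cooling
                heat[y * w + x] = static_cast<uint8_t>(std::max(v, 0));
            }
        }
    }
    // To display it, map each heat value through a black-red-yellow-white palette.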
Metaballs are another possibly interesting approach. They define an energy field around each blob and melt two shapes together when they get close enough. A link to an article can be found here.
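As a sketch of that idea (the r^2 / d^2 field and the threshold are the usual textbook formulation, not taken from the linked article):

    #include <vector>

    struct Ball { float x, y, r; };

    // Energy field of a set of metaballs: each ball contributes r^2 / d^2.
    // A pixel counts as "inside" where the summed field exceeds a threshold
    // (e.g. 1.0f), which is what makes two nearby blobs melt together.
    float metaballField(float px, float py, const std::vector<Ball>& balls)
    {
        float sum = 0.0f;
        for (const Ball& b : balls) {
            float dx = px - b.x, dy = py - b.y;
            float d2 = dx * dx + dy * dy + 1e-6f;  // avoid division by zero
            sum += (b.r * b.r) / d2;
        }
        return sum;
    }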
Something called a Wolfram Worm seems to be an awesome project to attempt. It would be easy to calculate random smooth movement by moving along two connected Bézier curves. Loads of awesome demos can be found on this page:
http://levitated.net/daily/index.html
I really like the Julia 4D quaternion fractal.
Video: Julia 4D animation in F#
