Robot-Relative Movement - sphero-api

By default, Sphero movement commands are user-relative, set by the initial orientation calibration (blue tail light).
I have added FPV hardware to one of my Spheros and want to issue robot-relative movement commands. It seems like this should actually be simpler, but from what I can see I may instead have to issue direct motor-control commands, which defeats stabilization and other useful features.
Is there a way of changing modes or a command set that offers simple robot-relative navigation?
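One workaround that keeps stabilization enabled: keep issuing ordinary roll commands, but compute the heading yourself by reading the robot's current yaw (available through the API's sensor streaming) and adding the desired robot-relative angle to it. A rough sketch, with an entirely hypothetical `SpheroClient` wrapper standing in for whatever SDK you are using:

```cpp
#include <cmath>

// Hypothetical wrapper; substitute the real calls from your Sphero SDK.
struct SpheroClient {
    double currentYawDegrees();            // e.g. from IMU sensor streaming
    void roll(int speed, int headingDeg);  // the normal user-relative command
};

// Robot-relative move: 0 = straight ahead, 90 = turn right, and so on.
// Stabilization stays on because we still go through the roll command.
void rollRelative(SpheroClient &sphero, int speed, int relativeHeading) {
    double yaw = sphero.currentYawDegrees();
    int heading = (int)std::fmod(yaw + relativeHeading + 360.0, 360.0);
    sphero.roll(speed, heading);
}
```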

Related

macOS scan/detect screen in real time

There are some apps that can scan the screen and detect things in real time, for example the preinstalled macOS app "Digital Color Meter": I can move the cursor and the app immediately detects which color is in the area around it. So my question is: how can I do things like this? How can I "scan" the screen and detect objects or colors in a selected area in real time? I can't find a solution.
Digital Color Meter only captures a small square of the screen. If that's all you need, try CGDisplayCreateImageForRect and see if it's fast enough.
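A minimal sketch of the one-shot approach, assuming the common 32-bit BGRA capture format (worth verifying with CGImageGetBitsPerPixel and CGImageGetAlphaInfo before trusting the byte order):

```cpp
// Command-line tool; link against the ApplicationServices framework.
#include <ApplicationServices/ApplicationServices.h>
#include <cstdio>

int main() {
    // Where is the cursor right now?
    CGEventRef event = CGEventCreate(nullptr);
    CGPoint cursor = CGEventGetLocation(event);
    CFRelease(event);

    // Grab a small square around the cursor, Digital Color Meter-style.
    CGRect rect = CGRectMake(cursor.x - 8, cursor.y - 8, 16, 16);
    CGImageRef image = CGDisplayCreateImageForRect(CGMainDisplayID(), rect);
    if (!image) return 1;

    CFDataRef data = CGDataProviderCopyData(CGImageGetDataProvider(image));
    const UInt8 *pixels = CFDataGetBytePtr(data);
    size_t stride = CGImageGetBytesPerRow(image);

    // Sample the center pixel. Note the captured image can be larger than
    // requested on Retina displays, so use its actual width/height.
    size_t w = CGImageGetWidth(image), h = CGImageGetHeight(image);
    const UInt8 *px = pixels + (h / 2) * stride + (w / 2) * 4;
    std::printf("center pixel: R=%u G=%u B=%u\n", px[2], px[1], px[0]);

    CFRelease(data);
    CGImageRelease(image);
    return 0;
}
```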
If that's not fast enough, look at the CGDisplayStream functions, starting with CGDisplayStreamCreate or CGDisplayStreamCreateWithDispatchQueue. These functions are significantly more complicated than CGDisplayCreateImageForRect and you'll have to learn about IOSurfaceRef to get at the pixel data from a CGDisplayStream.
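And a rough sketch of the streaming route (compiled as Objective-C++ or C++ with -fblocks, linking CoreGraphics and IOSurface; recent macOS versions also require the screen-recording permission):

```cpp
#include <CoreGraphics/CoreGraphics.h>
#include <IOSurface/IOSurface.h>
#include <dispatch/dispatch.h>
#include <cstdio>

int main() {
    dispatch_queue_t queue =
        dispatch_queue_create("screen-scan", DISPATCH_QUEUE_SERIAL);

    // Stream the main display, scaled down to 64x64 BGRA frames.
    CGDisplayStreamRef stream = CGDisplayStreamCreateWithDispatchQueue(
        CGMainDisplayID(), 64, 64, 'BGRA', nullptr, queue,
        ^(CGDisplayStreamFrameStatus status, uint64_t displayTime,
          IOSurfaceRef surface, CGDisplayStreamUpdateRef update) {
            if (status != kCGDisplayStreamFrameStatusFrameComplete) return;
            // Lock the IOSurface to read this frame's pixel data.
            IOSurfaceLock(surface, kIOSurfaceLockReadOnly, nullptr);
            const uint8_t *pixels =
                (const uint8_t *)IOSurfaceGetBaseAddress(surface);
            std::printf("top-left pixel: B=%u G=%u R=%u\n",
                        pixels[0], pixels[1], pixels[2]);
            IOSurfaceUnlock(surface, kIOSurfaceLockReadOnly, nullptr);
        });
    if (!stream) return 1;

    CGDisplayStreamStart(stream);  // frames now arrive on the queue
    dispatch_main();               // keep the process alive
}
```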

Applying post-effect / pixel shader to Windows

I am color blind. This is usually not a problem for me, but whenever a game or application uses red and green as contrasting colors it can become an annoyance. Looking for accessibility tools online, all I've managed to find are tools that adjust colors in snapshots or in a camera feed. For this reason, I've been toying with the idea of writing my own color adjustment tool.
Suppose I wanted to write my own shader or post-effect that shifts or swaps color values and then apply it to everything I see in Windows (10) in real time: is there a way to do this? For example, is it possible to insert a pixel shader into the Windows rendering pipeline? Or would it be possible to write an application that takes the entire screen as input and outputs something else, faster than, say, 5 ms per frame on a modern GPU? How would you approach this?
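One route that avoids hooking the rendering pipeline entirely: the Magnification API (Windows 8 and later) can apply a 5×5 color matrix to the whole screen via MagSetFullscreenColorEffect. It can't do coordinate-dependent effects, but a global channel adjustment is only a few lines. A minimal sketch; the matrix shown simply swaps red and green:

```cpp
#include <windows.h>
#include <magnification.h>
#pragma comment(lib, "Magnification.lib")

int main() {
    if (!MagInitialize()) return 1;

    // 5x5 color matrix applied to [r g b a 1]; this one swaps red and green.
    MAGCOLOREFFECT effect = {{
        {0.0f, 1.0f, 0.0f, 0.0f, 0.0f},
        {1.0f, 0.0f, 0.0f, 0.0f, 0.0f},
        {0.0f, 0.0f, 1.0f, 0.0f, 0.0f},
        {0.0f, 0.0f, 0.0f, 1.0f, 0.0f},
        {0.0f, 0.0f, 0.0f, 0.0f, 1.0f},
    }};
    MagSetFullscreenColorEffect(&effect);

    MessageBoxW(nullptr, L"Color effect active. Click OK to restore.",
                L"Color swap", MB_OK);
    MagUninitialize();  // the effect is removed when the process exits
    return 0;
}
```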

Creating a small mouse pendant under windows

I am not really a Windows programmer, but I am somewhat experienced with scripting languages under Linux, like PHP and Python, so I know the basics of programming.
I would like to write a small sprite/pointer that follows the mouse movement, for the following reason:
I am running Linux and have Windows 7 in VirtualBox. VirtualBox has a long-standing unfixed bug: it cannot draw the mouse pointer on OpenGL viewports when the mouse is captured. If mouse integration is enabled, VirtualBox does draw the guest's pointer on OpenGL views, but manipulating the view with the mouse (movements like tilt and pan in 3D applications) is almost impossible, because the view moves unpredictably, as if the mouse speed were set insanely high, and it's impossible to steer the view. That happens even when the pointer speed is set really low on both the host and the guest. If the mouse is captured, manipulating the view with the mouse is possible, but no pointer is visible at all. A Google search shows that the bug has been reported repeatedly over some years already, so others are facing the problem and it's not going to be fixed quickly.
tl;dr: It's impossible to use the mouse in many OpenGL applications running in VirtualBox.
So my idea is that a small pointer, like an arrow or a cross, could follow the actual system pointer, always staying in the foreground and indicating where the mouse is when it is invisible due to the VBox bug. It should be something other than the system's mouse sprite: just an image that is always drawn in front.
Can someone please point me to some resources that could teach me how to write such a small toy in C++ or C#?
Thanks
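The pieces you need in Win32 are a topmost, click-through popup window, GetCursorPos, and a timer. A rough, untested C++ sketch of the whole idea:

```cpp
#include <windows.h>

LRESULT CALLBACK WndProc(HWND hwnd, UINT msg, WPARAM wp, LPARAM lp) {
    switch (msg) {
    case WM_TIMER: {  // follow the system cursor
        POINT pt;
        GetCursorPos(&pt);
        SetWindowPos(hwnd, HWND_TOPMOST, pt.x + 2, pt.y + 2, 0, 0,
                     SWP_NOSIZE | SWP_NOACTIVATE);
        return 0;
    }
    case WM_PAINT: {  // draw a simple cross
        PAINTSTRUCT ps;
        HDC dc = BeginPaint(hwnd, &ps);
        MoveToEx(dc, 0, 12, nullptr); LineTo(dc, 25, 12);
        MoveToEx(dc, 12, 0, nullptr); LineTo(dc, 12, 25);
        EndPaint(hwnd, &ps);
        return 0;
    }
    case WM_DESTROY:
        PostQuitMessage(0);
        return 0;
    }
    return DefWindowProcW(hwnd, msg, wp, lp);
}

int WINAPI wWinMain(HINSTANCE hInst, HINSTANCE, PWSTR, int) {
    WNDCLASSW wc = {};
    wc.lpfnWndProc = WndProc;
    wc.hInstance = hInst;
    wc.hbrBackground = (HBRUSH)GetStockObject(WHITE_BRUSH);
    wc.lpszClassName = L"MousePendant";
    RegisterClassW(&wc);

    // WS_EX_LAYERED + WS_EX_TRANSPARENT makes the window click-through;
    // WS_EX_TOPMOST keeps it above everything else on the desktop.
    HWND hwnd = CreateWindowExW(
        WS_EX_LAYERED | WS_EX_TRANSPARENT | WS_EX_TOPMOST | WS_EX_TOOLWINDOW,
        L"MousePendant", L"", WS_POPUP, 0, 0, 25, 25,
        nullptr, nullptr, hInst, nullptr);
    SetLayeredWindowAttributes(hwnd, 0, 255, LWA_ALPHA);  // fully opaque
    ShowWindow(hwnd, SW_SHOWNOACTIVATE);
    SetTimer(hwnd, 1, 10, nullptr);  // reposition ~100 times a second

    MSG msg;
    while (GetMessageW(&msg, nullptr, 0, 0)) {
        TranslateMessage(&msg);
        DispatchMessageW(&msg);
    }
    return 0;
}
```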

A skinning engine in Windows: draw “dirty” regions only or the whole window at once?

I want to make a skinning engine capable of drawing custom-shaped windows with alpha blending; that is, it'll use layered windows (UpdateLayeredWindow). A typical window will contain, on top of its background, a couple dozen other bitmaps ranging from 10×10 to, say, 300×150 pixels. In the worst case most of these elements will have smooth animation at up to 30 fps. Everything will be alpha-blended and I am going to use Direct2D for this (yes, I know older Windows versions don't support it). In general, Winamp's modern skin engine is the closest example.
Given all this, and taking into account the performance of modern PCs, can I just redraw the whole window every single frame, or do I have to constrain drawing to some sort of clip rectangle?
D2D requires you to render in response to WM_PAINT messages.
Honestly, use the IAnimation interface and just let D2D and Windows worry about how often to redraw. Though I will let you know: Winamp is done with Adobe AIR, and layered windows with D2D cause issues. (I think you have to use a DXGI render target, but with the window being layered it needs a DC to be returned to an EndPaint call so it can update its alpha channel.)
I have some experience with this.
If you need to support Windows XP, UpdateLayeredWindow is the only choice available for solving this problem. The documentation for this call says it copies the whole bitmap to the screen each time it is called, and this bottleneck showed up in my benchmarking as the real limiting factor. If your window is 300×300 you pay that price on every update, even if you are careful to modify only a couple of pixels. It would be very easy to over-optimize the rendering side for no real benefit, so implement something simple, measure, and then decide whether you need to optimize.
If you can drop support for Windows XP then you can avoid UpdateLayeredWindow completely and use DwmExtendFrameIntoClientArea to create the same effect as a layered window. You'll write less code, avoid the UpdateLayeredWindow bottleneck, and D2D will be easier to work with.
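For reference, the DWM route amounts to a couple of calls; a minimal sketch (Vista and later, with composition enabled):

```cpp
#include <windows.h>
#include <uxtheme.h>
#include <dwmapi.h>
#pragma comment(lib, "dwmapi.lib")

// Extend the frame over the entire client area ("sheet of glass").
// Anything you then render with zero alpha becomes fully transparent,
// so pair this with a premultiplied-alpha Direct2D render target and
// you get per-pixel transparency without UpdateLayeredWindow.
void MakeWindowGlass(HWND hwnd) {
    MARGINS margins = {-1, -1, -1, -1};  // -1 = extend into everything
    DwmExtendFrameIntoClientArea(hwnd, &margins);
}
```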

Is it possible to create full screen color overlay effects in windows?

I remember that my old Radeon graphics drivers had a number of overlay effects or color filters (whatever they are called) that would render the screen in, e.g., sepia tones or negative colors. My current NVIDIA card does not seem to have such a function, so I wondered whether it is possible to make my own for Vista.
I don't know if there is some way to hook into Windows' rendering engine or, alternatively, into NVIDIA's drivers to achieve this effect. While it would be cool just to be able to modify the colors, it would be even better to modify a color based on its screen coordinates or to perform other, more varied functions. An example would be colors that are more desaturated the farther they are from the center of the screen.
I don't have a specific use scenario so I cannot provide much more information. Basically, I'm just curious if there is anything to work with in this area.
You could have a full-screen layered window on top of everything that passes click events through. However, that's hacky and slow compared to what could be done by getting a hook into the DWM renderer's DirectX context. So far, though, that's not possible, as Microsoft does not provide any public interface into it.
The Flip 3D utility does something like this, but even there the functionality is not in the program itself; it's in the DWM DLL, called by ordinal (a hidden/undocumented function, obviously, since it serves no other purpose). So that's pretty much another dead end, and I haven't bothered to dig deeper.
On that front, the best we can do is wait for some kind of official API.
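For completeness, the layered-window hack from the first paragraph looks roughly like this: a topmost, click-through window tinting the whole screen at a uniform alpha. It cannot do the coordinate-dependent effects you describe (that would need per-pixel rendering through UpdateLayeredWindow, which is much slower), but it shows the idea:

```cpp
#include <windows.h>

int WINAPI wWinMain(HINSTANCE hInst, HINSTANCE, PWSTR, int) {
    WNDCLASSW wc = {};
    wc.lpfnWndProc = DefWindowProcW;
    wc.hInstance = hInst;
    wc.hbrBackground = CreateSolidBrush(RGB(0, 80, 160));  // tint color
    wc.lpszClassName = L"ScreenTint";
    RegisterClassW(&wc);

    // WS_EX_TRANSPARENT lets clicks fall through to the windows beneath.
    HWND hwnd = CreateWindowExW(
        WS_EX_LAYERED | WS_EX_TRANSPARENT | WS_EX_TOPMOST | WS_EX_TOOLWINDOW,
        L"ScreenTint", L"", WS_POPUP,
        0, 0, GetSystemMetrics(SM_CXSCREEN), GetSystemMetrics(SM_CYSCREEN),
        nullptr, nullptr, hInst, nullptr);
    SetLayeredWindowAttributes(hwnd, 0, 64, LWA_ALPHA);  // 64/255 ≈ 25% tint
    ShowWindow(hwnd, SW_SHOWNOACTIVATE);

    MSG msg;
    while (GetMessageW(&msg, nullptr, 0, 0)) DispatchMessageW(&msg);
    return 0;
}
```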
