MIT App Inventor 2 - Why use speed?

In my simple school project, there is a character that moves using buttons. When deciding how I wanted to move a sprite, I stumbled upon this dilemma: why use speed? The two approaches look pretty much identical to me so far.
Here is what I mean (the top MoveTo block and the bottom one seem the same), given of course that I set Character.Speed to 10 in the screen's initialization.
What would I gain by replacing the simple integer value with Character.Speed?

You can set the Heading and the Speed of an image sprite, and the sprite will then move automatically in the direction defined by Heading. Alternatively, use the MoveTo method to move the image sprite to a given x/y coordinate.
See also the documentation:
Heading
Returns the sprite's heading in degrees above the positive x-axis. Zero degrees is toward the right of the screen; 90 degrees is toward the top of the screen.
Interval
The interval in milliseconds at which the sprite's position is updated. For example, if the interval is 50 and the speed is 10, then the sprite will move 10 pixels every 50 milliseconds.
Speed
The speed at which the sprite moves. The sprite moves this many pixels every interval.
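A quick worked example from those two properties: with Speed = 10 and Interval = 50, the sprite covers 10 pixels every 50 ms, i.e. 10 / 0.05 = 200 pixels per second, and it keeps moving at that rate until Speed or Heading changes. A MoveTo call, by contrast, is a one-off jump to the given x/y coordinate.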

Related

libGDX particle effect: is it possible to reverse it?

I'm working with the libGDX particle effect system, and I have the following need: to reverse an effect in time. My effects look like scattering particles that appear at a single spot; what I need is exactly the opposite: particles appearing in random places around the central spot and gathering into it. It is all sprite animation, so presumably it can be reversed, but how?
I think you can, because the particle editor allows negative velocity; I've done it in the particle editor myself.
1. Set the velocity to -300 low to +300 high (or whichever values suit you).
2. The particle size is zero to begin with (invisible); this lets the particle move away from the origin and appear at a radius.
3. At the desired radius, when the particle arrives, restore the size (it becomes visible).
4. This is also the point at which the velocity reverses, heading back to the origin.
The point is that negative and positive velocity both head away from the origin (on opposite sides), but as the value interpolates between the two, the course reverses. As long as the particle is invisible on the way out (size 0), it appears to pop in at the radius and travel inwards.
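To see why the interpolation produces exactly that motion, here is a minimal, framework-free sketch (TypeScript; the names and numbers are mine, not the libGDX particle editor's) of a velocity running linearly from -V to +V over a particle's life, with the size held at 0 on the outward leg:

const V = 300;   // mirrors the -300 (low) .. +300 (high) range above
const T = 2.0;   // particle lifetime in seconds

// Signed distance from the origin along the emission angle at time t,
// i.e. the integral of v(t) = -V + 2*V*(t/T): zero at t = 0, farthest
// out at t = T/2 (where v crosses zero), and back to zero at t = T.
function radius(t: number): number {
  return -V * t + (V * t * t) / T;
}

// Invisible while flying outwards, visible once the velocity has reversed.
function size(t: number): number {
  return t < T / 2 ? 0 : 1;
}

// Print the motion: the visible half runs from the outer radius back home.
for (let t = 0; t <= T; t += T / 8) {
  console.log(t.toFixed(2), radius(t).toFixed(1), size(t));
}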

Subdividing a curve with points equal to distance between points

In three.js I have pushed some Vector3() coordinates into an array, and I call new THREE.CatmullRomCurve3(points).getPoints(points.length * 32); to get a curve. If I extrude a shape along it, it all looks fine.
The problem I am having now is that I want the camera to run along this curve, but the speed changes depending on how far apart the (original) points are. Following one of the examples from threejs.org, I loop through each of the new points and set the camera's position to them. But whether two points are 1 meter apart or 2 meters apart, there are still 32 subpoints between them, so the camera covers each stretch in exactly the same time.
How can I make it so that a 1 meter stretch gets 32 subpoints and a 2 meter stretch gets 64, so that the camera takes twice as long to complete the 2 meter stretch as the 1 meter one?
If you use Curve.getSpacedPoints() instead of Curve.getPoints(), you will get equally spaced points along your curve.
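getSpacedPoints() samples the curve by arc length, so consecutive samples are equidistant and stepping the camera one sample per frame yields constant speed. A minimal sketch (TypeScript; the control points and names are mine):

import * as THREE from 'three';

const points = [
  new THREE.Vector3(0, 0, 0),
  new THREE.Vector3(1, 0, 0),  // a 1 m stretch
  new THREE.Vector3(3, 0, 0),  // a 2 m stretch: automatically gets ~2x samples
];
const curve = new THREE.CatmullRomCurve3(points);

// Equidistant samples over the whole curve, instead of 32 per segment.
const path = curve.getSpacedPoints(points.length * 32);

let i = 0;
function step(camera: THREE.Camera): void {
  // One equidistant point per frame = constant camera speed.
  camera.position.copy(path[Math.min(i++, path.length - 1)]);
}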
three.js R106

How can I render my 2D background under the water with OpenGL ES

I am rendering my 2D background under the water with OpenGL ES. How can I distort my textures over time? I only know that this can be achieved with sin(time) or cos(time), but I'm not fluent in GLSL and have no idea how to do it. Should I change the x/y coordinates over time? And how can I avoid moving the whole texture back and forth as one block?
Thanks for any help.
You can distort the texture coordinates to achieve this, but you will need a few parameters.
For instance, you can use a sin or cos function (there is not much difference between them) to distort horizontally by moving the X texture coordinate a small amount. So you introduce a uniform, strength, which should be relative to the texture; for instance .1 will distort by a maximum of 10%. The idea is then to set X = sin(Y) * strength.
Since Y is in the range 0 to 1, you need another parameter, density, to get "more waves"; a value around 20 gives a few waves (change it as you please to test for a nice effect). The equation then becomes X = sin(Y * density) * strength.
This still produces a static distorted image, but you want it to move over time, so you need a time factor, delta, which changes over time in the range .0 to 2*PI; the equation is then X = sin(Y * density + delta) * strength. On every frame, increase delta, and if it grows larger than 2*PI, simply decrease it by 2*PI to keep the animation smooth. The amount you increase delta by controls the speed of the effect.
So now you have three uniform parameters to play around with until you get the desired effect. I hope you find the effect you're after.
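For concreteness, here is a sketch of that fragment shader, embedded as a string the way you would pass it to OpenGL ES / WebGL (the uniform and varying names are my own, not from the question):

const fragmentShader = `
  precision mediump float;

  uniform sampler2D u_texture;
  uniform float strength;  // max horizontal displacement, e.g. 0.1 for 10%
  uniform float density;   // wave count, e.g. 20.0
  uniform float delta;     // time phase, advanced each frame, wrapped at 2*PI

  varying vec2 v_texCoord;

  void main() {
    vec2 uv = v_texCoord;
    // X = sin(Y * density + delta) * strength, applied as a per-row offset,
    // so each row shifts differently instead of the whole texture sliding.
    uv.x += sin(uv.y * density + delta) * strength;
    gl_FragColor = texture2D(u_texture, uv);
  }
`;

// Per frame: delta += speed * dt; if (delta > 2 * Math.PI) delta -= 2 * Math.PI;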

Correct Translation for artificial horizon

I would like to draw an artificial horizon. The center of the view would represent a perfectly horizontal view, with roll rotating the horizon line and pitch moving it up or down.
The question is: what is the correct calculation to translate the horizon line up or down, given the pitch angle?
My guess is that this probably depends on the FOV angle one assumes for the camera, so this angle would need to be a factor in the algorithm. Ideally I would figure out this angle for the iPhone/iPad camera, so that the artificial horizon lines up with the actual horizon if you hold the device in front of you and look towards the horizon.
Until now I've been guesstimating the offset, but I would like to have the exact formula.
Try horizon_offset / (screen_height / 2) = tan(pitch) / tan(vertical_FOV / 2).
Look at the picture, and the formula derives itself.
[Figure omitted; source: zwibbler.com]
Update: I had two angles mixed up. One is the FOV angle of the camera; the other is the viewing angle of the screen. These are two different things, and the latter depends on the viewing distance. You probably have to estimate this distance, and adjust magnification and/or focal distance so that objects visible on the screen have the same angular size as the same objects seen with the naked eye. (With my particular phone, you would need to magnify the image by an additional factor of about 3 after the 5x zoom, if the user stretches the arm holding the phone all the way forward.) Then the two angles are the same, and the formula works.
If you want to introduce magnification (i.e. objects on the screen have different sizes from their real-life counterparts), multiply the horizon offset by the magnification factor.
Update 2: When taking the viewing distance into account, the screen size cancels out, and the offset simply becomes viewing_distance * tan(pitch_angle) (with unit magnification).
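Both forms in a small sketch (TypeScript; the function and parameter names are mine), with all angles in radians:

// Screen-relative form: offset in pixels from the screen center, valid when
// the screen's angular size matches the camera's vertical FOV.
function horizonOffsetPx(pitch: number, verticalFov: number,
                         screenHeightPx: number): number {
  return (screenHeightPx / 2) * Math.tan(pitch) / Math.tan(verticalFov / 2);
}

// Viewing-distance form (Update 2): the screen size cancels out entirely.
// Multiply by the magnification factor if it is not 1.
function horizonOffset(viewingDistance: number, pitch: number): number {
  return viewingDistance * Math.tan(pitch);
}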

How to get a 1 pixel line with NSBezierPath?

I'm developing a custom control. One of the requirements is to draw lines. Although this works, I noticed that my 1 pixel wide lines do not really look 1 pixel wide - I know they're not really pixels, but you know what I mean. They look more like two or three pixels wide. This becomes very apparent when I draw a dashed line with a 1 pixel dash and a 2 pixel gap: the 1 pixel dashes actually look like tiny lines instead of dots.
I've read the Cocoa Drawing documentation, and although Apple mentions the setLineWidth method, changing the line width to values smaller than 1.0 only makes the line look fainter, not thinner.
So, I suspect there's something else influencing the way my lines look.
Any ideas?
Bezier paths are drawn centered on their path, so if you draw a 1 pixel wide path along the X-axis, the line actually covers the Y-range { -0.5, 0.5 }. The solution is usually to offset the coordinate by 0.5 so that the line does not straddle a pixel boundary. You should be able to shift your bounding box by 0.5 to get sharper drawing behavior.
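The same half-pixel rule applies in any raster API that centers strokes on the path. Purely as an illustration (HTML canvas in TypeScript, not Cocoa):

const canvas = document.querySelector('canvas') as HTMLCanvasElement;
const ctx = canvas.getContext('2d')!;
ctx.lineWidth = 1;

// Blurry: the 1 px stroke straddles the pixel boundary at y = 10, so it is
// anti-aliased across two rows of pixels.
ctx.beginPath();
ctx.moveTo(0, 10);
ctx.lineTo(100, 10);
ctx.stroke();

// Crisp: offset by 0.5 so the stroke covers exactly one row of pixels.
ctx.beginPath();
ctx.moveTo(0, 20.5);
ctx.lineTo(100, 20.5);
ctx.stroke();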
Francis McGrew already gave the right answer, but since I did a presentation on this once, I thought I'd add some pictures.
The problem here is that coordinates in Quartz lie at the intersections between pixels. This is fine when filling a rectangle, because every pixel that lies inside the coordinates gets filled. But lines are technically (mathematically!) invisible. To draw them, Quartz has to actually draw a rectangle with the given line width, and this rectangle is centered over the coordinates.
So when you ask Quartz to stroke a rectangle with integral coordinates, it has the problem that it can only draw whole pixels, yet here we have half pixels. What it does is average the color: for a 50% black (the line color) and 50% white (the background) line, it simply draws each pixel in grey.
This is where your washed-out drawings come from. The fix is now obvious: don't draw between pixels. You achieve that by moving your points by half a pixel, so your coordinate is centered over the desired pixel.
Of course, just offsetting may not be what you want, because if you compare the filled variant to the stroked one, the stroke is one pixel larger towards the lower right. If you are, for example, clipping to the rectangle, this will cut off the lower right.
Since people usually expect the stroke to stay inside the specified rectangle, what you usually do is offset by 0.5 towards the center, so the lower right effectively moves up one pixel. Alternatively, many drawing apps offset by 0.5 away from the center, to avoid overlap between the border and the fill (which can look odd when drawing with transparency).
Note that this only holds true for 1x screens. 2x Retina screens actually exhibit this problem differently, because each of the pixels below is actually drawn by 4 Retina pixels, which means they can actually draw the half-pixels. However, you still have the same problem if you want a sharp 0.5pt line. Also, since Apple may in the future introduce other Retina screens where e.g. every pixel is made up of 9 Retina pixels (3x), or whatever, you should really not rely on this. Instead, there are now API calls to convert rectangles to "backing aligned", which does this for you, no matter whether you're running 1x, 2x, or a fictitious 3x.
PS - Since I went to the trouble of writing this all up, I've put it on my web site: http://orangejuiceliberationfront.com/are-your-rectangles-blurry-pale-and-have-rounded-corners/ where I'll update and revise this description and add more images.
The answer is (buried) in the Apple Docs:
"To avoid antialiasing when you draw a one-point-wide horizontal or vertical line, if the line is an odd number of pixels in width, you must offset the position by 0.5 points to either side of a whole-numbered position"
Hidden in the Drawing and Printing Guide for iOS: iOS Drawing Concepts, though nothing that specific is to be found in the current, standard (OS X) Cocoa Drawing Guide.
As for the effects of invoking setDefaultLineWidth: the docs also state that:
"A width of 0 is interpreted as the thinnest line that can be rendered on a particular device. The actual rendered line width may vary from the specified width by as much as 2 device pixels, depending on the position of the line with respect to the pixel grid and the current anti-aliasing settings. The width of the line may also be affected by scaling factors specified in the current transformation matrix of the active graphics context."
I found some info suggesting that this is caused by anti-aliasing. Turning anti-aliasing off temporarily is easy:
[[NSGraphicsContext currentContext] setShouldAntialias: NO];
This gives a crisp, 1 pixel line. After drawing, just switch it on again.
I tried the solution suggested by Francis McGrew, offsetting the x coordinate by 0.5; however, at first that did not seem to make any difference to the appearance of my line.
EDIT:
To be more specific, I changed the x and y coordinates individually and together, with an offset of 0.5.
EDIT 2:
I must have done something wrong earlier, because changing the coordinates with an offset of 0.5 actually does work. The end result is better than the one obtained by switching off anti-aliasing, so I'll make Francis McGrew's answer the accepted answer.
