Visual Basic 2010: Fixing 'Choppy' Graphics?

My basic issue is that the images I am drawing flicker, often. Apart from that, my program flow works exactly as I want, and is as follows:
Use G.Clear to clear everything drawn in the previous frame.
Draw the entities/shapes.
Work out the positions of the next entities, then restart the flow.
(This essentially shows a pendulum in motion, which unfortunately flickers badly.)
I believe the issue lies purely in G.Clear causing the choppiness.
I apologize if this isn't concise enough, and thanks in advance to anyone who can help me out here.

Try switching double buffering on (in WinForms, set the form's DoubleBuffered property to True and do your drawing in the Paint event, rather than clearing and redrawing directly on the screen each frame).
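For illustration, here is a minimal sketch of the same idea done by hand in Java (Swing), since the pattern is identical in any toolkit: build the whole frame in an off-screen image, then copy it to the screen in a single operation, so the cleared background is never visible between frames. The sizes, timer interval, and pendulum math are made up for the example.

    import java.awt.Color;
    import java.awt.Graphics;
    import java.awt.Graphics2D;
    import java.awt.image.BufferedImage;
    import javax.swing.JFrame;
    import javax.swing.JPanel;
    import javax.swing.Timer;

    class PendulumPanel extends JPanel {
        private double t = 0;   // simulation time in seconds

        PendulumPanel() {
            // Repaint roughly 60 times per second; each tick advances the simulation.
            new Timer(16, e -> { t += 0.016; repaint(); }).start();
        }

        @Override
        protected void paintComponent(Graphics screen) {
            super.paintComponent(screen);
            if (getWidth() <= 0 || getHeight() <= 0) return;

            // 1. Draw the whole frame into an off-screen buffer.
            BufferedImage buffer = new BufferedImage(getWidth(), getHeight(),
                    BufferedImage.TYPE_INT_RGB);
            Graphics2D g = buffer.createGraphics();
            g.setColor(Color.WHITE);
            g.fillRect(0, 0, getWidth(), getHeight());      // the "Clear" step
            double angle = Math.sin(t * 2.0) * 0.8;         // swinging pendulum angle
            int px = getWidth() / 2, py = 40;               // pivot
            int bx = px + (int) (120 * Math.sin(angle));    // bob position
            int by = py + (int) (120 * Math.cos(angle));
            g.setColor(Color.BLACK);
            g.drawLine(px, py, bx, by);                     // arm
            g.fillOval(bx - 10, by - 10, 20, 20);           // bob
            g.dispose();

            // 2. Blit the finished frame to the screen in one call -- no flicker.
            screen.drawImage(buffer, 0, 0, null);
        }

        public static void main(String[] args) {
            JFrame frame = new JFrame("Double-buffered pendulum");
            frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
            frame.add(new PendulumPanel());
            frame.setSize(400, 300);
            frame.setVisible(true);
        }
    }

The key point is that the on-screen surface only ever receives a finished frame; the clear and all the drawing happen on the hidden buffer.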

Related

Looking for a reasonable way to have several shapes as usable buttons in Processing

I recently (5 weeks ago) started my first school year in a school-based apprenticeship to become an IT assistant.
We're learning programming and are starting with very basic Processing things, while the ultimate plan is to get into C#.
Now I understand that Processing might not be the best language for my little project, but I would still like to work this out somehow.
What I want to build is a "Stargate Dial Computer". If you know the TV show, you'll know what I'm talking about.
I wanted to make it visually appealing, so I decided to use one of the available tools to create my shapes, as I am using a DHD (term from the show) for the dial process - see picture: https://i.imgur.com/r7jBjRG.png
This small shape setup is already over 500 lines of code, and that seems unwise in itself. Besides that, the plan is to have every single one of these trapezoids be a pushable button - but to achieve that manually I'd have to check the mouse position against each shape's coordinates to use them as buttons.
What I'm asking for now is any input on how to work with these shapes in a logical way to make my idea even possible.
Something like checking the shape's color instead of checking each shape's coordinates forty-odd times, and getting the "active" shape's size in some kind of function. Or a way to just walk through every shape one by one in a loop, checking every beginShape/endShape instance - if that wouldn't be a performance nightmare.
Keep in mind that I am a beginner. I do know the basics, also of other languages, and I can apply some programming logic here and there - but since I'm not sure yet what Processing can and can't do, I'm looking for an answer to whether this is even reasonable or possible.
Any help and ideas would be much appreciated!
Thanks!
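For what it's worth, here is a minimal sketch of the color-picking idea mentioned above, written as Processing-style Java. Everything in it is made up for the example (the button count, the ring layout, and the drawButton helper): each button is drawn twice, once normally and once into a hidden buffer in a unique flat color, and a mouse click just reads the hidden buffer's pixel under the cursor to find out which button was hit.

    PGraphics pick;          // hidden buffer used only for hit-testing
    int numButtons = 39;     // assumption: 39 DHD glyph buttons

    void setup() {
      size(600, 600);
      pick = createGraphics(width, height);
    }

    void draw() {
      background(0);
      // Visible pass: draw the buttons the player sees.
      for (int i = 0; i < numButtons; i++) {
        drawButton(g, i, color(180));
      }
      // Picking pass: same geometry, but each button gets a unique flat color.
      pick.beginDraw();
      pick.background(0);
      for (int i = 0; i < numButtons; i++) {
        drawButton(pick, i, color(i + 1, 0, 0));   // button id encoded in the red channel
      }
      pick.endDraw();
    }

    void mousePressed() {
      int id = (int) red(pick.get(mouseX, mouseY)) - 1;   // decode which button was hit
      if (id >= 0) println("button " + id + " pressed");
    }

    // Placeholder for your existing trapezoid-drawing code, parameterized by the
    // surface to draw on and the fill color, so it can target either buffer.
    void drawButton(PGraphics target, int i, int fillColor) {
      float a = TWO_PI * i / numButtons;                  // lay the buttons out in a ring
      float cx = width / 2 + cos(a) * 200;
      float cy = height / 2 + sin(a) * 200;
      target.noStroke();
      target.fill(fillColor);
      target.beginShape();
      target.vertex(cx - 20, cy - 10);
      target.vertex(cx + 20, cy - 10);
      target.vertex(cx + 12, cy + 10);
      target.vertex(cx - 12, cy + 10);
      target.endShape(CLOSE);
    }

The picking buffer is redrawn every frame here only for simplicity; it would only need redrawing when the layout changes, and drawButton stays the one place that knows the trapezoid geometry.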

How do I get started capturing and saving Project Tango point cloud data as a mesh?

I am in a similar situation as caspertm was when asking this question: How do I export Point Cloud Data (Project Tango)?
I apologize that I cannot comment on other questions yet or I would have just done so on that question. I too was looking for the functionality the mapper app provided (specifically the capturing and saving of 3d environments) and have found through searching and reading that question that it is not available for the tablet. The answer provided to caspertm's question was to use the point cloud data sample code as a starting point and modify it to log the data to a file.
I am wondering if anyone would be willing to go into more detail about what needs to be modified in the point cloud sample (I am using the Java version) to save that data and retrieve it later on my computer, so I can manipulate it in a program like Blender or Unity.
I am very new to the Android development process. I can read the sample point cloud Java code and get a very basic understanding of what is going on, but I definitely have a lot of learning to do. I realize I am asking for a lot of help and don't expect any one person (or even several) to paint me the entire picture, but tips on things like the following would be greatly appreciated: whether this data should be saved internally or externally, which Java file needs the saving code, how to format the file so it is readable in other 3D programs, and how to see more than just the current snapshot of the point cloud. If anyone could point me in the right direction on how to get the actual environment colors projected onto the cloud data, that would be amazing too, but any help or links for any of these requests would be welcome.
Thanks so much!
This answer addresses only the computational geometry aspects - the issues involved in getting the point cloud, phoning home with it, stuffing it in a file, etc. are considered 'self evident', so we can more quickly go play with the math :-)
Nice, shallow, pretty answer: if you're scanning something where the point cloud represents an object with a fair, curvy or straight surface, then the suggestions here will help -- https://blender.stackexchange.com/questions/7028/wrapping-a-mesh-around-point-cloud-with-cavities Please note that 'fair' is a loaded word.
The more detailed answer isn't pretty - and reality has a way of handing you point clouds that make the preceding algorithms very irritated. If you are looking to take a random cloud of points (yes, I know it's a meaningful cloud of points to you, but mathematicians make much of these details) and reconstruct a geometry from it, i.e. define the topology that relates those points in a meaningful way, you're talking about a very nasty problem. Check the internet for discussions of Delaunay triangulation and Voronoi diagrams, which are the more traditional approaches to solving this issue. Sort of. It's pretty straightforward if you were scanning a model of a volcano. Assuming Tango could see it (I think probably not), scanning the Calder mobile at JFK would give pretty much anyone a drinking problem. The algorithms themselves assume a planar basis and do not react well to fiddling with that assumption. Explaining this requires talking about manifolds, and, reading between the lines in your question, I'm assuming you'd rather not have me go any further.
You should be able to find some open source implementations - if it builds and passes all of its unit tests, then you should be OK using it as a black box. If you have to reach inside, be careful. Those things bite :-)
I think I can partially answer the question:
In terms of saving the points, it should be fairly simple: you could have a file open and keep writing the point data into it whenever the callback is called. However, as the Project Tango developer website mentions, the data provided by the API is just the points, not a mesh. That means that after getting the points you will need to figure out your own way to construct indices.
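As a concrete (hypothetical) illustration of the "keep writing the point data to a file" idea: the helper below assumes the points arrive as a FloatBuffer of packed x, y, z values, which is how the Java point cloud sample hands them to its callback, and writes them out as an ASCII PLY file, a simple text format that Blender can import directly. The class and method names are made up.

    import java.io.File;
    import java.io.IOException;
    import java.io.PrintWriter;
    import java.nio.FloatBuffer;
    import java.util.Locale;

    public final class PointCloudDump {

        // Write one callback's worth of points as an ASCII .ply file.
        public static void writeAsPly(FloatBuffer points, int numPoints, File out)
                throws IOException {
            try (PrintWriter w = new PrintWriter(out)) {
                // Minimal PLY header describing numPoints vertices with x/y/z floats.
                w.println("ply");
                w.println("format ascii 1.0");
                w.println("element vertex " + numPoints);
                w.println("property float x");
                w.println("property float y");
                w.println("property float z");
                w.println("end_header");

                points.rewind();
                for (int i = 0; i < numPoints; i++) {
                    float x = points.get();
                    float y = points.get();
                    float z = points.get();
                    w.println(String.format(Locale.US, "%f %f %f", x, y, z));
                }
            }
        }
    }

Saving to external storage (for example under getExternalFilesDir()) makes it easy to pull the file onto your computer over USB, and writing each callback to its own numbered file is one simple way to keep more than the current snapshot.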

Spritesheet or Separated PNGs?

So, I'm working on a small indie game, and for that I made my own animation system. It's pretty efficient at the moment, but I have some doubts about how it'll behave after I add 20-30-100 more animations, because (as the title says) every single frame is a different image in a separate folder.
So the question is: how will this work after I add more animations? Will it cause longer load times, or worse performance? I'm not totally sure, because e.g. the file size of the same animation as a spritesheet and as separate images is almost the same.
The question you should ask yourself when making this sort of decision is: "what is the cost of switching if I choose the simpler solution now and need the complex solution later?"
In this case, switching involves:
Switching your animation system to use the sprite sheet instead of the individual images. This can be really easy if you put your resource fetching and animation calls behind clean interfaces (see the sketch after this answer), and is not too bad unless you do something really horrible in your code.
Getting a program that combines your individual sprites into spritesheets. A quick Google search will find dozens of simple programs that do this, but if you need to write your own for some reason, it shouldn't be that bad either.
So, my non-answer answer is: you probably should not care, and if you still care, just take an hour or two and try both.
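A minimal sketch of that "clean interface" idea in Java (all class and method names are invented for the example): the animation code only ever asks a FrameSource for frame i, so moving from one PNG per frame to a packed spritesheet later only means swapping in a different implementation.

    import java.awt.image.BufferedImage;
    import java.io.File;
    import java.io.IOException;
    import java.util.Arrays;
    import javax.imageio.ImageIO;

    interface FrameSource {
        BufferedImage frame(int index);
        int frameCount();
    }

    // Today: one PNG per frame in a folder, loaded up front.
    class FolderFrames implements FrameSource {
        private final BufferedImage[] frames;

        FolderFrames(File folder) throws IOException {
            File[] files = folder.listFiles((d, name) -> name.endsWith(".png"));
            Arrays.sort(files);                     // relies on frame_000.png style naming
            frames = new BufferedImage[files.length];
            for (int i = 0; i < files.length; i++) frames[i] = ImageIO.read(files[i]);
        }

        public BufferedImage frame(int index) { return frames[index]; }
        public int frameCount() { return frames.length; }
    }

    // Later, if needed: the same frames packed into one sheet of fixed-size cells.
    class SheetFrames implements FrameSource {
        private final BufferedImage sheet;
        private final int cellW, cellH, count;

        SheetFrames(File sheetFile, int cellW, int cellH, int count) throws IOException {
            this.sheet = ImageIO.read(sheetFile);
            this.cellW = cellW;
            this.cellH = cellH;
            this.count = count;
        }

        public BufferedImage frame(int index) {
            int cols = sheet.getWidth() / cellW;
            return sheet.getSubimage((index % cols) * cellW,
                                     (index / cols) * cellH, cellW, cellH);
        }

        public int frameCount() { return count; }
    }

Either implementation can sit behind the same animation code, which is exactly what makes the later switch cheap.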

How can I tell OpenGL how often to draw stuff?

Okay, I'm going to sound like an idiot with this one. Here goes.
I've been doing iOS development for about a year now, but only tonight have I started doing anything OpenGL related. I've followed Jeff LaMarche's wonderful guide and I'm drawing a neat looking triangle, and I got it to flip around and stuff. I'm one entertained programmer.
Okay, here's the stupid question part: how can I set things up so that OpenGL performs glRotatef and glDrawArrays (and friends) continuously, or at a set number of frames per second? I've tried Googling it, but I really can't come up with good search terms.
Thanks in advance, and get ready to field a ton more of these questions.
While the others make good suggestions for the general case of OpenGL ES, I know that you're probably working on iOS here, Will, so there's a better platform-specific alternative. In your case, I believe you'll be better served by CADisplayLink, which fires off callbacks that are synchronized with the refresh rate of the screen. Using this, you'll get far smoother updates than with a timer or some kind of polling within a loop.
This is particularly effective when combined with Grand Central Dispatch, as I describe in my answer here. When I switched from using a loop to CADisplayLink for updates, my rendering became much smoother on all iOS devices due to fewer dropped frames. Adding GCD on top of that made things even better.
You can refer to my Molecules code for an example of this in action (see the SLSMoleculeGLViewController for how my autorotation is animated with this). Apple's OpenGL ES application template also uses CADisplayLink for updates, last I checked.
You should read up on the concept of game loops.
http://entropyinteractive.com/2011/02/game-engine-design-the-game-loop/ is a good resource to get you started.
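To show the structure such a game loop has, here is a minimal sketch in plain Java with made-up numbers (your real version would live inside your OpenGL view's update path): the simulation is advanced in fixed timesteps while rendering happens as often as it can, so the rotation speed stays the same no matter how fast frames are drawn.

    public class GameLoop {
        static final double STEP = 1.0 / 60.0;   // advance the simulation 60 times per second
        static double angle = 0;                 // the state being animated (a rotation, in degrees)

        public static void main(String[] args) {
            long previous = System.nanoTime();
            double lag = 0;
            boolean running = true;

            while (running) {
                long now = System.nanoTime();
                lag += (now - previous) / 1_000_000_000.0;
                previous = now;

                // Catch the simulation up in fixed increments.
                while (lag >= STEP) {
                    angle += 90.0 * STEP;                  // rotate 90 degrees per second
                    lag -= STEP;
                    if (angle >= 360.0) running = false;   // stop the demo after one full turn
                }

                render();                                  // draw as often as possible
            }
            System.out.printf("finished one revolution at %.1f degrees%n", angle);
        }

        static void render() {
            // Stand-in for the glRotatef/glDrawArrays calls; a real renderer would draw here.
        }
    }

On iOS specifically, the CADisplayLink approach from the answer above replaces the hand-rolled while loop, but the separation between "update at a fixed rate" and "render when the screen is ready" carries over.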
Well, I'm not an expert on the subject, but can't you just put the rotate/draw commands in a while loop that ends when a certain button is pressed or when a specific event occurs?

BumpTop - What's the relation with physics?

Along with all the buzz about the wonderful BumpTop desktop environment, I'm now wondering: what is the relation between physics and the techniques BumpTop uses? Basically, I am interested in learning the techniques/algorithms followed in this desktop environment. For example:
Collision detection -- used when one icon is about to collide with another.
Any other known techniques?
It probably uses a quite common rigid body dynamics simulator as used in (simple/older) computer games. If you want to play with one yourself, have a look at Open Dynamics Engine.
I'd say that it uses mechanics (the branch of physics that describes motion) to determine/calculate the outcome of object interaction.
I remember seeing a very early demo of this months ago - it looks very impressive!
Well, it looks as though it's using friction and velocity as well. The friction slows the animations down - if it didn't, then things would just fly out of the way. Velocity is used to make things move at speed in certain directions.
More info here
BumpTop
Extensive use of physics effects like bumping and tossing is applied to documents when they interact, for a more realistic experience.
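To make the friction/velocity point above concrete, here is a tiny sketch in plain Java with invented numbers: each icon carries a velocity that moves it every frame, and an exponential friction term damps that velocity so a tossed icon slides and then settles instead of flying off.

    public class TossedIcon {
        double x, y;                          // position on the desktop
        double vx, vy;                        // current velocity, pixels per second
        static final double FRICTION = 3.0;   // higher means the icon stops sooner

        void update(double dt) {
            x += vx * dt;
            y += vy * dt;
            // Exponential damping: velocity decays smoothly toward zero.
            double damping = Math.exp(-FRICTION * dt);
            vx *= damping;
            vy *= damping;
        }

        public static void main(String[] args) {
            TossedIcon icon = new TossedIcon();
            icon.vx = 400;                    // "toss" the icon to the right at 400 px/s
            for (int step = 0; step < 5; step++) {
                icon.update(0.5);             // half-second steps for the printout
                System.out.printf("t=%.1fs  x=%.1f  vx=%.1f%n",
                        (step + 1) * 0.5, icon.x, icon.vx);
            }
        }
    }

A real simulator such as Open Dynamics Engine adds collision detection and rotation on top of this basic integrate-and-damp step, but the idea is the same.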
