Image moving across screen at the Speed of Light [closed]

Would it be possible to write a program that makes an image, let's say a circle the size of a dime, move back and forth across your computer screen at the speed of light, given a monitor that is, let's say, 20 inches wide? If not, would it be possible to make the image move across the screen at 50 or 100 mph?

Would it be possible to write a program that makes an image, let's say a circle the size of a dime, move back and forth across your computer screen at the speed of light, given a monitor that is, let's say, 20 inches wide
The speed of light is 299,792,458 meters per second. Your screen is about half a meter across and refreshes about 60 times a second. Within a single refresh of the screen, your dime would have had to bounce back and forth across it about five million times.
You can simulate this, though. Just draw a dime-height horizontal bar across the screen.
50 mph is 22.352 meters per second. Even then, the dot would cross a half-meter screen roughly 45 times per second, so at a 60 Hz refresh you'd see it at only one or two positions per crossing; you're not going to see anything particularly useful at that speed either.
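If you want to play with the numbers, here's a minimal back-of-the-envelope sketch in plain C that converts a real-world speed into pixels per frame. The 20-inch width, 1920-pixel resolution and 60 Hz refresh are illustrative assumptions; plug in your own monitor's values.

#include <stdio.h>

int main(void)
{
    /* Illustrative assumptions: adjust to your own monitor. */
    const double screen_width_in  = 20.0;    /* physical width in inches */
    const double screen_width_px  = 1920.0;  /* horizontal resolution    */
    const double refresh_hz       = 60.0;    /* monitor refresh rate     */
    const double speed_mph        = 50.0;    /* desired apparent speed   */

    const double speed_in_per_sec = speed_mph * 63360.0 / 3600.0; /* 63,360 in/mile */
    const double px_per_inch      = screen_width_px / screen_width_in;
    const double px_per_frame     = speed_in_per_sec * px_per_inch / refresh_hz;

    printf("%.0f mph = %.0f inches/second = about %.0f pixels per frame\n",
           speed_mph, speed_in_per_sec, px_per_frame);
    return 0;
}

At 50 mph this works out to roughly 1,400 pixels per 60 Hz frame under those assumptions, i.e. the dot jumps most of the screen width every frame, which is why nothing useful would be visible.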

The speed of light is approximately 186,282 miles per second, which is 11,802,827,520 inches per second. On your 20-inch monitor it would bounce back and forth 590,141,376 times per second. We'll be generous and say the refresh rate of your monitor is 120 Hz, meaning you'd see only 1 out of every 4,917,845 bounces. And that assumes the dot could be drawn instantaneously exactly when it was needed.

It wouldn't be possible, since the actual speed of light is hardly achieved by anything at all. Even with a perfect screen driven purely over optical fibre, the signal would still have to travel the distance from the computer to the screen to make an object move. Covering that distance takes time and so reduces the overall speed at which your object travels on the screen.
50 to 100 mph is not remotely comparable to the speed of light, though; simulating that should pose no problem.

Related

How to speed up an object's movement in OpenGL without losing movement smoothness

Here's the function that is registered as the display callback with glutDisplayFunc():
static int i = 0;   // x position of the point; persists across frames

void RenderFunction(void)
{
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glPointSize(5);
    glBegin(GL_POINTS);
    glVertex2i(0 + i, 0);   // draw the point at x = i
    glEnd();
    glutSwapBuffers();
    i += 1;                 // advance one pixel per redraw
    glutPostRedisplay();    // ask GLUT to redraw again immediately
}
This way the point moves across the screen, but its speed is really slow.
I can speed it up by incrementing i by more than 1, but then the motion doesn't look smooth. How do I achieve higher speed?
I used to work with SFML, which is built on top of OpenGL, and there an object moved really fast with the move() method, so there has to be a way in plain OpenGL too.
In this case, there's probably not a lot you can do other than moving your point further each time you redraw. In particular, most performance improvements probably won't have any significant effect on the perceived speed here.
The problem is fairly simple: you're changing the location by one pixel at a time. Chances are pretty good that you have screen updating "locked" so it happens in sync with the monitor's refresh.
Assuming that's the case, with a typical monitor that refreshes at 60 Hz, you're going to get a fixed rate of your point moving 60 pixels per second. If you improve the code's efficiency, the movement speed won't change--it'll just reduce the amount of CPU time you're using to move the dot at the same speed.
Essentially your only choice to make it move faster is to move more than one pixel per screen refresh. One pixel per screen refresh means 60 pixels per second, so (for example) moving across a typical HD screen (1920 dots horizontally) will take 1920 pixels ÷ 60 pixels/second = 32 seconds.
With really slow code, you might use 1% of the CPU to do that. With faster code, that might drop to some unmeasurably small amount--but either way, the point travels at the same speed, so it'll take 32 seconds to get across the screen.
If you wanted to, you could unlock your screen updates from the screen refresh. Since you're only drawing one point, you can probably update the screen at a few thousand frames per second, so the dot would appear to move across the screen a lot faster.
In reality, however, the monitor is still only refreshing the screen at 60 Hz. When your code updates faster than that, it just means you'll produce a number of intermediate updates that never show up on the screen. As far as the pictures actually showing up on the screen go, you're just moving the point more than one pixel per screen refresh. The fact that you updated data in memory for all the intermediate points doesn't really change anything. The visible result is essentially the same as if you redrew once per screen refresh, and moved the point a lot further each time.
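If you want the on-screen speed to be independent of both the refresh rate and the step size, a common approach is to compute the position from elapsed time rather than incrementing a counter each frame. Here's a minimal sketch using GLUT's elapsed-time query; the speed constant, the screen width used for wrapping, and the pixel-space coordinates are assumptions matching the question's setup.

#include <math.h>
#include <GL/glut.h>

static const float SPEED_PX_PER_SEC = 600.0f;  /* assumed target speed        */
static const float SCREEN_WIDTH_PX  = 1920.0f; /* assumed width for wrapping  */

void RenderFunction(void)
{
    /* GLUT_ELAPSED_TIME is milliseconds since glutInit() */
    float seconds = glutGet(GLUT_ELAPSED_TIME) / 1000.0f;
    float x = fmodf(seconds * SPEED_PX_PER_SEC, SCREEN_WIDTH_PX);

    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glPointSize(5);
    glBegin(GL_POINTS);
    glVertex2i((int)x, 0);   /* position derived from time, not frame count */
    glEnd();
    glutSwapBuffers();
    glutPostRedisplay();
}

Whether the display callback runs 60 or 600 times a second, the point now covers the same number of pixels per second; the refresh rate only limits how many of those positions you actually get to see.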
Define the model-view matrix with i as the translation component, then apply that matrix before drawing your vertex. But yes, as others are saying, try to move to modern OpenGL.
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();               // reset first so the translation doesn't accumulate
glTranslatef(i, 0.0f, 0.0f);    // translate along x only

What factors are best for image resizing? [closed]

Let's say I have an image that is 3000 px wide. I know (at least I think I do) that if I downsize it to be 1500 px wide (that is, 50%), the result will be better than if I resize it to be 1499 or 1501 px wide.
I suppose that will be so regardless of the algorithm used. But I have no solid proof, and the reason I'd like to have proof is that it could help me decide less obvious cases.
For instance, reducing it to 1000 px (one third) will also presumably work ok. But what about 3/4? Is it better than 1/2? It certainly can hold more detail, but will part of it not become irretrievably fuzzy? Is there a metric for the 'incurred fuzziness' which can be offset against the actual resolution?
For instance, I suppose such a metric would clearly show 3000 -> 1501 to be worse than 3000 -> 1500, and by more than whatever is gained from the one extra pixel.
Intuitively, 1/n resizes, where n is an integer divisor of the original size, should yield the best results, followed by ratios n/m where the numerator and denominator are as small as possible. Where the original size (both X and Y) is not a multiple of the denominator, I expect the results to be poorer, though I have no proof of that.
These issues must have been studied by someone. People have devised all sorts of complex resizing algorithms, so presumably they take this into consideration somehow. But I don't even know where to ask these questions; I ask them here because I've seen related ones with good answers. Thanks for your attention, and please excuse the contrived presentation.
The algorithm is key. Here's a list of common ones, from lowest quality to highest. As you get higher in quality, the exact ratio of input size to output size makes less of a difference. By the end of the list you shouldn't be able to tell the difference between resizing to 1499 or 1500.
Nearest Neighbor, i.e. keeping some pixels and dropping others.
Bilinear interpolation. This takes the 2x2 area of pixels around the point where your ideal sample would be and calculates a new value based on how close that position is to each of the 4 pixels. It doesn't work well if you're reducing by more than 2:1, because it starts to skip input pixels and resemble nearest neighbor.
Bicubic interpolation. Similar to bilinear, but using a 4x4 area with a more complex formula to get sharper results. Again, not good beyond 2:1.
Pixel averaging. If this isn't done with an integer ratio of input to output, you'll be averaging a different number of pixels for each output pixel and the results will be uneven (see the sketch after this list).
Lanczos filtering. This takes a number of pixels from the input and runs them through a windowed version of the sinc function that tries to retain as much detail as possible while keeping the calculation tractable. The size and cost of the filter vary with the resizing ratio. It's slow, but not as slow as full sinc.
Sinc filtering. This is theoretically perfect, but it requires processing a large chunk of input for every pixel output so it's very slow. You may also notice the difference between theory and practice when you see ringing artifacts in the output.
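To make the pixel-averaging point concrete, here's a minimal sketch for an 8-bit grayscale image reduced by an integer factor, the case where every output pixel averages the same number of input pixels. The function name and row-major, single-channel buffer layout are illustrative assumptions.

#include <stdlib.h>

/* Downscale a w x h grayscale image by an integer factor using a box average. */
unsigned char *downscale_box(const unsigned char *src, int w, int h, int factor)
{
    int ow = w / factor, oh = h / factor;
    unsigned char *dst = malloc((size_t)ow * oh);
    if (dst == NULL)
        return NULL;

    for (int oy = 0; oy < oh; ++oy) {
        for (int ox = 0; ox < ow; ++ox) {
            unsigned sum = 0;
            for (int dy = 0; dy < factor; ++dy)
                for (int dx = 0; dx < factor; ++dx)
                    sum += src[(oy * factor + dy) * w + (ox * factor + dx)];
            dst[oy * ow + ox] = (unsigned char)(sum / (unsigned)(factor * factor));
        }
    }
    return dst;
}

With a non-integer ratio (3000 -> 1499, say), each output pixel would straddle input pixels and need fractional weights, which is exactly where the unevenness mentioned above comes from.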
The answer to your question:
The most important factor is choosing a good resizing algorithm. For example, bicubic interpolation will not work well if you resize by a factor greater than 2 and do not apply smoothing first. Unfortunately, there is no single best algorithm. If you are using Photoshop or another advanced resizing tool you can choose the algorithm; in Picasa you cannot. Each algorithm has its downsides: some are better for natural images, others for computer-generated images.
A less important factor is whether the ratio divides evenly. The larger the output image, the better the result will look, but the file will take more megabytes; rescaling from 3000 px to 1600 will give visually better results than rescaling to 1500.
Another factor is the number of rescales. Resizing an image from 3000 to 2000 and then to 1500 will produce a slightly worse result than resizing directly from 3000 to 1500, because each time you resize the image, some information is lost.
Friendly advice: keep the size of your image (both height and width) divisible by 4. For example, 1501 is a bad size; 1500 or 1504 is better. The reason is that some hardware deals with images faster when their dimensions are divisible by 4. Quality will not improve, but your viewing experience will be smoother.
If you display your image on a computer screen, try to match its size to the size of your screen. Otherwise the display process will make another resampling and you will not be able to observe the true beauty of your image.
If you intend to print your image, better have a high resolution. You will need at least 300 dpi. So if you want to print it on 10 inch paper, leave it at least 3000 pixels.
The last one is obvious, but I will mention it: try to keep the original aspect ratio when you resize the image. Otherwise it will become distorted. So if you downscale it from 3000 width to 1499, then you will not be able to choose an integer for image height to keep the original aspect ratio.
JPEG compression will harm your image much more than the difference in visual quality between a 1500 px image and a 1499 px one. Keep that in mind: even with slight compression you will not be able to see that difference in quality.
As a summary: stop worrying about the exact image size. Choose a modern resampling algorithm (if you can) and roughly estimate the size as a trade-off between size on disk, image quality and printing paper size (if relevant).
Keep the original aspect ratio and remember that JPEG compression will harm your image much more than the difference in visual quality of different resampling algorithms or slight variation in image size.

In Processing 2, how would I make this happen? [closed]

In Processing 2, how would I make this happen?
The first left click of the mouse should display a ball centered where the mouse was when clicked.
The second left click of the mouse should display another ball centered where the mouse was when clicked.
Once both balls are displayed, a left click will launch the 1st ball at the second ball.
When the 1st ball touches the edge of the second ball, the 1st ball should stop, and the second ball should move in the same direction, at the same speed, and for the same distance that the 1st ball moved.
I won't give you code for this, as that would be too involved (and isn't the purpose of Stack Overflow anyway). However, I'll outline some of the principles you'll need, and hopefully you can go from there.
The first thing to do is keep track of the state. It sounds like your states will be:
Waiting for first click.
Waiting for second click.
Moving 1st ball.
Moving 2nd ball.
This approach is quite common, and is sometimes referred to as a Finite State Machine. Typically you'd define a constant integer for each state, and store a "current state" integer somewhere. It will be updated when you want to change/advance state.
In the main drawing loop, you'd execute different code depending on which state you're in. For example, in the third state, it will draw both balls and keep moving the 1st ball closer to the 2nd, calculating the distance between them. When they touch, it moves on to the fourth state.
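Just to illustrate the pattern itself (not the exercise), a bare state-machine skeleton might look something like the following. It's written in C-style code for brevity, and the state names and function split are assumptions, but Processing's Java syntax is very close.

/* Skeleton of the finite-state-machine idea only -- not a solution. */
enum State {
    WAIT_FIRST_CLICK,
    WAIT_SECOND_CLICK,
    MOVE_FIRST_BALL,
    MOVE_SECOND_BALL
};

static enum State currentState = WAIT_FIRST_CLICK;

/* Called once per frame, like Processing's draw(). */
void updateAndDraw(void)
{
    switch (currentState) {
    case WAIT_FIRST_CLICK:
        /* nothing placed yet */
        break;
    case WAIT_SECOND_CLICK:
        /* draw the first ball */
        break;
    case MOVE_FIRST_BALL:
        /* draw both balls, step the 1st toward the 2nd,
           and switch to MOVE_SECOND_BALL when they touch */
        break;
    case MOVE_SECOND_BALL:
        /* draw both balls, step the 2nd ball */
        break;
    }
}

/* Called on a mouse press, like Processing's mousePressed(). */
void onMousePressed(void)
{
    if (currentState == WAIT_FIRST_CLICK)
        currentState = WAIT_SECOND_CLICK;   /* store the click position here */
    else if (currentState == WAIT_SECOND_CLICK)
        currentState = MOVE_FIRST_BALL;     /* and here */
}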
You'll obviously need a mouse handler to detect and handle clicks. That will store the ball positions and advance the state appropriately.
For the mathematical side of it, you'll need two things. First, you'll need to get comfortable using vector maths (specifically, normalising a vector to calculate direction, and multiplying it up to get a desired speed). Secondly, you'll need to use the Euclidean distance formula (basically just Pythagoras' theorem) to calculate the distance between the balls, determining when they're close enough to touch each other. There are loads of tutorials online for all this stuff which you may find useful.
If you get stuck on a particular issue in your coding, feel free to post a more specific question (although always bear in mind it may have been asked/answered already).

2D Data Matrix (barcode) detection algorithm is not giving me results [closed]

I am working on detection of 2D Data Matrix barcodes, but there is a problem with detection because the barcode design changes with each product, so how can I detect it? Can anybody help me?
The Data Matrix specification is designed so that the code can be identified. You need to look for the code the way it is intended to be found. Where I'd start is that the code has a quiet zone and an "L" finder pattern; that is what you are looking for.
How you go about doing this depends a lot on the general parameters of the image.
The first consideration is lighting and contrast. Can you depend on having a fixed midpoint, where everything lighter is called white and everything darker black? Or will a simple histogram give a usable midpoint? Or do shadows and uneven lighting cause a value to be called black on the sunny side of the image while the same tone is called white on the shadow side? On a flatbed scanner it is easy to depend on good contrast, but camera-phone photos are more problematic.
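If a simple histogram does look usable, one well-known way to turn it into a threshold is Otsu's method, which picks the split that best separates the two populations of pixel values. A minimal sketch for an 8-bit grayscale buffer follows; the function name and buffer layout are illustrative, and this won't rescue the shadow/sunny-side case described above.

/* Pick a global black/white threshold from an 8-bit grayscale image
   using Otsu's method (maximize between-class variance). */
int otsu_threshold(const unsigned char *pixels, int n)
{
    long hist[256] = {0};
    for (int i = 0; i < n; ++i)
        hist[pixels[i]]++;

    double total = (double)n, sum_all = 0.0;
    for (int t = 0; t < 256; ++t)
        sum_all += t * (double)hist[t];

    double sum_bg = 0.0, w_bg = 0.0, best_var = -1.0;
    int best_t = 127;
    for (int t = 0; t < 256; ++t) {
        w_bg += hist[t];                  /* pixels at or below t */
        if (w_bg == 0.0) continue;
        double w_fg = total - w_bg;       /* pixels above t       */
        if (w_fg == 0.0) break;
        sum_bg += t * (double)hist[t];
        double mean_bg = sum_bg / w_bg;
        double mean_fg = (sum_all - sum_bg) / w_fg;
        double between = w_bg * w_fg * (mean_bg - mean_fg) * (mean_bg - mean_fg);
        if (between > best_var) { best_var = between; best_t = t; }
    }
    return best_t;   /* values <= best_t -> black, otherwise white */
}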
The next consideration is size and resolution. In a camera-phone application you can expect a low-resolution image in which a high percentage of the frame contains the barcode, while a scanner may produce a large image with only a small region of barcode data that has to be searched for.
Finally comes presentation. Will the barcode appear at any of 360 degrees of rotation? Will it be flat and level, or can it be skewed, curled and angled? Is there any concern about lens distortion?
Once you can answer these considerations, they should point to what you need to do to identify the barcode. Data Matrix has clocking marks that allow distorted codes to be read, but modelling the distortion is a lot more work, so you wouldn't do it if it isn't needed.

360 degree field-of-view without stitching? [closed]

Is there any type of camera that can have a 360 degree field of view with just one single shot, without using any stitching algorithms and post-processing steps? Or, is it possible to have one such camera with the appropriate use of lenses and other optical components?
I think that the closest you can get to a one-shot panorama is using a mirrored ball - there are a number of resources on how to use them scattered across the web. The short version is that you set up the mirrored ball and shoot its reflection, then post-process to unwrap the image. If you shot vertically down (or up) into the ball, you would get a 360 degree view of the scene; however, owing to the shape of the mirror, the resolution will drop off as you approach the ball's horizon.
Though the mirrored ball images are cool on their own, you will most likely still want to post-process the image. I've used panotools before and can vouch for them. They have a built-in ability to remap mirrored ball images to latitude-longitude (what we're more used to seeing as panoramas).
To really get it right, you could build a custom mirror rig and do the math to remap the mirrored images to your panorama. This is e.g. how the Google Street View cars do it - they have what looks like a cone of mirrors on the car, and they post-process the image from the mirrors. Of course, this is moving heavily toward the post-processing effort, but it is a true one shot one camera 360 degree panorama.
I'm not aware of cameras manufactured for such a thing, but you can rig two fisheye cameras into one unit. I'm not sure why you don't want software: unless you're using a hemispherical screen, you have to correct for your screen anyway. There are open-source stitchers, but I like Microsoft's ICE:
http://research.microsoft.com/en-us/um/redmond/groups/ivm/ice/
