Google Earth KML AnimatedUpdate multiple changes - animation

I cannot figure out how to apply multiple AnimatedUpdate 'tour steps' in KML. Please help. All the data is linked here to demonstrate the problem. In this example, first open the link to the rail model - this has the snapshot that takes you (the user) to the project area. Zoom in on the northernmost rail curve. Then open the other KMZ, named DOC02. DOC02 provides the animation (tour). Play the tour (part 2) and the hopper goes a short way around the curve, even though the tour data has enough steps to almost complete the entire curve - why are only the first 3 animated updates applied?
At first I tried to use the guides referred to by countless other Stack Overflow posts on this topic, but that result was even worse. The 3D model has a location ID of t1 and an orientation ID of t2. When I use these as the targetId in the animated updates - yuck! - the animation is totally incorrect; the model goes directly to the end of the curve. So I added IDs at every transform change. This idea gave better animation, but the animation only uses the first 2 animated updates. If all this text just gives people a headache, let me know and I will provide a video - I probably will make one later today anyway. Thanks for your patience.
Overhead rail model in KMZ (Google Earth)
3D hopper model set to move around the curve in KMZ (Google Earth)

https://developers.google.com/kml/documentation/touring#gxanimatedupdate-and-the-tour-timeline
The link above takes you to the KML documentation page, which specifically states:
"Animated updates run parallel to the timeline - that is, the tour continues directly to the next tour primitive in the playlist, while the animated update is taking place. The gx:duration controls the length of time it takes for the update to occur, but doesn't delay the next tour primitive. To allow an animated update to complete before the next action takes place, insert a gx:Wait, with a duration equal to the duration of the update, between the animated update and the following tour primitive. In addition, an animated update will be truncated if its duration extends beyond that of the last gx:FlyTo or gx:Wait element. You can either change the appropriate gx:duration values, or insert an additional gx:Wait element at the end of the playlist to give the animated update time to complete.
This essentially tells you that gx:AnimatedUpdate will not keep the tour going; only gx:FlyTo and gx:Wait prolong the tour. The reason your tour ends after the first 2 animations is that those are all the animations it can get through within the tour's length.
Basically, to fix your issue, insert a gx:Wait with a duration >= your gx:AnimatedUpdate's duration after EACH AnimatedUpdate. This will prolong the tour and allow your animations time to complete before the tour ends (because each gx:Wait prolongs the tour).
(Alternatively, a single Wait long enough to cover all the updates, much like the one you already have at the end, will also do.)
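For illustration, here is a minimal sketch of that playlist pattern; the targetId, coordinates and durations below are placeholders, not values taken from your model:

<gx:Playlist>
  <!-- Step 1: move the model a little way along the curve -->
  <gx:AnimatedUpdate>
    <gx:duration>2.0</gx:duration>
    <Update>
      <targetHref></targetHref>
      <Change>
        <Location targetId="t1">
          <longitude>151.001</longitude>
          <latitude>-33.751</latitude>
        </Location>
      </Change>
    </Update>
  </gx:AnimatedUpdate>
  <!-- Matching wait: keeps the tour timeline running while the update plays -->
  <gx:Wait>
    <gx:duration>2.0</gx:duration>
  </gx:Wait>
  <!-- Step 2: the next AnimatedUpdate, again followed by its own gx:Wait, and so on -->
</gx:Playlist>

Each gx:Wait extends the tour timeline by the same amount as the update that precedes it, so every animated step gets time to play before the playlist ends.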
Also see: https://developers.google.com/kml/documentation/touring#tour-timelines

Related

Remove background and get deer as foreground?

I want to remove background and get deer as a foreground image.
This is my source image, captured by a trail camera:
This is what I want to get. This output image can be a binary image or RGB.
I have worked on it and tried many methods to get a solution, but every time it failed at a specific point. So please first understand what my exact problem is.
Images are captured by a trail camera, and the camera is a motion detector: when a deer comes in front of the camera, it captures an image.
The scene changes with the weather, day and night, etc., so I can't use frame differencing or something like that.
Segmentation may not work correctly because the foreground (deer) and background have the same color in many cases.
If anyone still has any ambiguity about my question, then please first ask me to clarify and then answer; it will be appreciated.
Thanks in advance.
Here's what I would do:
As was commented on your question, you can detect the deer and then perform GrabCut to segment it from the picture.
To detect the deer, I would couple a classifier with a sliding-window approach. That means you'll have a classifier that, given a patch (it can be a large patch) of the image, outputs a score of how similar that patch is to a deer. The sliding-window approach means that you loop over the window size and then loop over the window location. For each position of the window in the image, you apply the classifier to that window and get a score of how much that window "looks like" a deer. Once you've done that, threshold all the scores to get the "best windows", i.e. the windows that are most similar to a deer. The rationale is that if a deer is present at some location in the image, the classifier will output a high score at all windows that are close to or overlap the actual deer location. We would like to merge all those locations into a single location. That can be done by applying the function groupRectangles from OpenCV:
http://docs.opencv.org/modules/objdetect/doc/cascade_classification.html#grouprectangles
Take a look at a face detection example from OpenCV; it basically does the same thing (sliding window + classifier), where the classifier is a Haar cascade.
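To make the sliding-window part concrete, here is a rough Python/OpenCV sketch. score_patch is a hypothetical stand-in for whatever classifier you end up training, and the window sizes, stride and threshold are arbitrary:

import cv2

# Rough sliding-window sketch; `score_patch` stands in for whatever deer
# classifier you train (HOG+SVM, CNN features, ...) - it is hypothetical here.
def score_patch(patch):
    raise NotImplementedError("plug in your deer classifier here")

def detect_deer(image, window_sizes=((128, 128), (192, 192)), stride=32, threshold=0.8):
    candidates = []
    for win_w, win_h in window_sizes:                        # loop over window sizes
        for y in range(0, image.shape[0] - win_h, stride):   # loop over window locations
            for x in range(0, image.shape[1] - win_w, stride):
                patch = image[y:y + win_h, x:x + win_w]
                if score_patch(patch) > threshold:           # window "looks like" a deer
                    candidates.append([x, y, win_w, win_h])
    if not candidates:
        return []
    # Merge overlapping high-scoring windows into a single detection.
    # groupRectangles drops rectangles that appear only once, hence the doubling.
    rects, weights = cv2.groupRectangles(candidates * 2, 1, 0.2)
    return rects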
Now, I didn't mention what that "deer classifier" can be. You can use HOG+SVM (both are included in OpenCV) or the much more powerful approach of running a deep convolutional neural network (deep CNN). Luckily, you don't need to train a deep CNN. You can use the following packages with their "off the shelf" ImageNet networks (which are very powerful and might even be able to identify a deer without further training):
Decaf, which can be used only for research purposes:
https://github.com/UCB-ICSI-Vision-Group/decaf-release/
Or Caffe - which is BSD licensed:
http://caffe.berkeleyvision.org/
There are other packages, which you can read about here:
http://deeplearning.net/software_links/
The most common ones are Theano, Cuda ConvNet and OverFeat (but that's really opinion based; you should choose the best package from the list that I linked to).
The "off the shelf" ImageNet network were trained on roughly 10M images from 1000 categories. If those categories contain "dear", that you can just use them as is. If not, you can use them to extract features (as a 4096 dimensional vector in the case of Decaf) and train a classifier on positive and negative images to build a "dear classifier".
Now, once you have detected the deer, meaning you have a bounding box around it, you can apply GrabCut:
http://docs.opencv.org/trunk/doc/py_tutorials/py_imgproc/py_grabcut/py_grabcut.html
You'll need an initial scribble on the deer to perform GrabCut. You can just take a horizontal line in the middle of the bounding box and hope that it will be on the deer's torso. A more elaborate approach would be to find the symmetry axis of the deer and use that as a scribble, but you would have to research and implement some method to extract a symmetry axis from the image.
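A minimal GrabCut sketch along the lines of the linked tutorial, assuming the detector has already given you a bounding box (the file names and rectangle values below are placeholders):

import cv2
import numpy as np

img = cv2.imread("trail_photo.jpg")          # hypothetical input image
rect = (100, 80, 300, 220)                   # (x, y, w, h) box from your detector - placeholder values

mask = np.zeros(img.shape[:2], np.uint8)
bgd_model = np.zeros((1, 65), np.float64)
fgd_model = np.zeros((1, 65), np.float64)

# Initialise GrabCut from the detection rectangle and run a few iterations.
cv2.grabCut(img, mask, rect, bgd_model, fgd_model, 5, cv2.GC_INIT_WITH_RECT)

# Keep pixels labelled as (probable) foreground; everything else becomes background.
fg_mask = np.where((mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD), 1, 0).astype("uint8")
deer = img * fg_mask[:, :, np.newaxis]
cv2.imwrite("deer_foreground.png", deer)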
That's about it. Not straightforward, but neither is the problem.
Please let me know if you have any questions.
Try OpenCV background subtraction with Mixture of Gaussians models. They should be adaptable enough for your scenes. Of course, the final performance will depend on the scenario, but it is worth trying.
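For example, a minimal sketch with OpenCV's MOG2 subtractor; the file names are placeholders and the parameters would need tuning for trail-camera snapshots:

import cv2

# Mixture-of-Gaussians background subtraction applied to a sequence of snapshots.
subtractor = cv2.createBackgroundSubtractorMOG2(history=50, detectShadows=True)

for path in ["frame_001.jpg", "frame_002.jpg", "frame_003.jpg"]:
    frame = cv2.imread(path)
    fg_mask = subtractor.apply(frame)        # 255 = foreground, 127 = shadow, 0 = background
    cv2.imwrite(path.replace(".jpg", "_mask.png"), fg_mask)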
Since you just want to separate the background from the foreground I think you do not need to recognize the deer. You need to recognize an object in motion in the scene. You just need to separate what is static in a significant interval of time (background) from what is not static: the deer.
There are algorithms that combine multiple frames from the same scene in order to determine the background, like THIS ONE.
You mentioned that the scene changes with the weather, day and night, etc., considering photos of different deer.
You could implement a solution where, when motion is detected, instead of taking a single photo the camera takes a few, separated by some interval of time.
This interval has to be long enough to get the deer in different positions or out of the scene, and at the same time short enough not to be much affected by scene variations. Perhaps you will need to deal with some brightness variation, but I think it is feasible to determine the background using these frames and finally segment the deer in the "motion frame".
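A sketch of that idea, estimating the background as the per-pixel median of a small burst of frames (file names, burst size and the threshold are placeholders):

import cv2
import numpy as np

# Estimate the background as the per-pixel median of a burst of frames taken
# around the motion trigger, then segment whatever differs from it.
paths = ["burst_0.jpg", "burst_1.jpg", "burst_2.jpg", "burst_3.jpg", "burst_4.jpg"]
frames = [cv2.imread(p, cv2.IMREAD_GRAYSCALE) for p in paths]

background = np.median(np.stack(frames), axis=0).astype("uint8")

motion_frame = frames[0]                                  # the frame that should contain the deer
diff = cv2.absdiff(motion_frame, background)
_, deer_mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
cv2.imwrite("deer_mask.png", deer_mask)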

Platformer Game - A realistic path-finding algorithm

I am making a game and I have come across a hard part to implement in code. My game is a tile-based platformer with lots of enemies chasing you. Basically, I want my enemies to be able, every frame/second/2 seconds, to find a realistic, shortest path to my player. I originally thought of A* as a solution, but it leads the enemies along paths that defy gravity, which is not good. Also, multiple enemies will be using it every second to get the latest path and then walk the first few tiles of it, discarding the rest of the path every second. I know this seems like a lot - calculating a new path every second, all at the same time if there is more than one enemy - but I don't know any other way to achieve what I want.
This is a picture of what I want:
Explanation: the green figure is the player and the red one is an enemy. The grey tiles are regular, open tiles with nothing in them, while the brown tiles are ones you can stand on. Finally, the highlighted yellow tiles represent the path that I want my enemy to be able to find in order to realistically get to the player.
So, the question is: what realistic path-finding algorithm can I use to achieve this, while keeping it fast?
EDIT:
I updated the picture to represent the most complicated map there could be. This map represents what the player of my game actually sees; they just use WASD to move around, and they see themselves move through this 2D platformer view. There will be different types of enemies, all with different speeds and jump heights, but all will have enough jump height and speed to make the jumps in this map and maneuver through it. The maps are generated by simply reading an XML file that has the level data in it. The data is parsed and different types of tiles are placed in the tile-holding sprite, according to what the XML says. Example XML node: (type="reg" graphic="grass2" x="5" y="7"), where the x and y are multiplied by a constant gridSize (like 30 or something) and the tiles are placed down accordingly. The enemies get their frame-by-frame instructions from an AI class attached to them. This class is responsible for producing the path and returning the first direction to the enemy; this should only happen every second or so, so that the enemies don't follow an old, wrong path. Please let me know if you understand my concept and have some thoughts/ideas, or maybe even the answer that I'm looking for.
ALSO: the physics in this game is separate from the pathfinding; it works just fine, using an AABB-vs-AABB concept (the player and enemies also being AABBs).
The trick with using A* here is how you link tiles together to form available paths. Take, for example, the first gap the red enemy would need to cross. The 'link' to the next platform (i.e. the brown tile to the left) is actually a jump action, not a move action. Additionally, it's up to you to determine how the nodes connect together; for starters, I'd add a heavy penalty when moving from a gray tile over a brown tile to a gray tile with nothing underneath (without discouraging jumps that open a shortcut).
There are two routes I see personally: either run a quick prediction of how far and where the enemy can jump and adjust how the algorithm determines node adjacency, or accept the path as-is, determine when parts of the path "hang" in the air (no brown tile immediately below), and animate the enemy "jumping" to the next part of the path. The tricky part is handling cases where the enemy might pass through brown tiles in the event the jump path isn't a parabola.
I am not versed in either solution; just something I've thought about.
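As a rough illustration of the adjacency/penalty idea, here is a sketch in Python; solid(x, y), max_jump and the cost constants are all assumptions you would replace with your own tile data and jump physics:

WALK_COST, JUMP_COST, FALL_COST = 1.0, 3.0, 1.5   # made-up tuning constants

def supported(x, y, solid):
    # A tile is standable when the tile directly below it is solid (brown).
    return solid(x, y + 1)

def neighbours(x, y, solid, max_jump=3):
    # Yield (nx, ny, cost) edges for A*. y grows downwards, as in screen space.
    for dx in (-1, 1):
        if supported(x + dx, y, solid):
            yield x + dx, y, WALK_COST            # plain walk
        else:
            dy = 1                                # step off the edge and fall
            while dy < 20 and not supported(x + dx, y + dy, solid):
                dy += 1
            if supported(x + dx, y + dy, solid):
                yield x + dx, y + dy, FALL_COST * dy
    for dy in range(1, max_jump + 1):             # jumps, penalised by height
        for dx in (-1, 0, 1):
            if supported(x + dx, y - dy, solid):
                yield x + dx, y - dy, JUMP_COST * dy

An A* search over these edges then naturally prefers walking and short jumps over long, gravity-defying hops.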
You need to give us the most complicated case of map, player and enemy behaviour (including jump height and horizontal speed) that you are going to create, either automatically or manually, so we can give relevant advice. The given map is so simple: put the map in a 2-dimensional array, store the initial player location as an element of that map, then test whether the lower-numbered column on the same row is occupied by a brown tile; if not, move the enemy there, and repeat until that fails; then try the higher-numbered column on the same row, and so on, to move the enemy.
Update: from my reading, the stage generation is something you create yourself, not semi-random.
My suggestion is that the enemy creates invisible clones of itself with the same AI, and each clone starts off in a different direction (jump up/left/right, jump diagonally right/left); every time a clone succeeds, it creates a new clone - basically a genetic algorithm. From the map it seems an enemy never needs to evaluate one path over another; one way simply fails to get closer to the player's initial position and another doesn't.

What is the best way of making concurrent animations?

This is an algorithm/data-structure question about running different animations at the same time. For example, a ball is falling one pixel per millisecond, a bullet is moving 5 pixels per millisecond, and a man is moving 1 pixel per 20 milliseconds - and imagine there are hundreds of them together. What is the best way of putting all the animations together, moving what needs to move in one function call, and removing the ones whose animation has completed? I don't want to create a thread for each one. What I want to do is create one thread that moves all the items and sleeps until an object needs to be moved.
Note: I'm using Java/Swing, drawing objects and images in a JPanel.
I recently did something similar in Python. I don't know if this is the best method, but here's what I did.
Create an abstract Event class with the following public interface:
tick - calculates how much time has passed since the last tick. Perform work proportional to that time span. This should be called frequently to create the illusion of smooth movement; maybe sixteen times a second or so.
isDone - returns true when the Event has finished occurring.
Make a subclass of Event for anything that takes more than one frame to finish: rotating, scaling, color changing, etc. You might create a TweenEvent subclass of Event if you want to move an image from one part of the screen to another. During each tick, redraw the image in a position farther from the original position and closer to the destination position.
You can run many Events concurrently, like so:
List<Event> events = new ArrayList<Event>();
// add a bunch of TweenEvents here - one for a bullet, one for a ball, etc.
while (true) {
    Thread.sleep(1000 / 16);               // roughly 16 ticks per second; needs InterruptedException handling
    Iterator<Event> it = events.iterator();
    while (it.hasNext()) {
        Event e = it.next();
        e.tick();
        if (e.isDone()) { it.remove(); }   // remove finished events safely while iterating
    }
}

What is an algorithm I can use to program an image compare routine to detect changes (like a person coming into the frame of a web cam)?

I have a web cam that takes a picture every N seconds. This gives me a collection of images of the same scene over time. I want to process that collection of images as they are created to identify events like someone entering into the frame, or something else large happening. I will be comparing images that are adjacent in time and fixed in space - the same scene at different moments of time.
I want a reasonably sophisticated approach. For example, naive approaches fail for outdoor applications. If you count the number of pixels that change, for example, or the percentage of the picture that has a different color or grayscale value, that will give false positive reports every time the sun goes behind a cloud or the wind shakes a tree.
I want to be able to positively detect a truck parking in the scene, for example, while ignoring lighting changes from sun/cloud transitions, etc.
I've done a number of searches, and found a few survey papers (Radke et al, for example) but nothing that actually gives algorithms that I can put into a program I can write.
Use color spectrum analysis, without luminance: when the sun goes down for a while you will get a similar result, since the colors do not change (too much).
Don't go for big changes, but for quick changes. If the luminance of the image changes by -10% over 10 minutes, it is the usual evening effect. But when the change is -5%, 0, +5% within seconds, it's a quick change.
Don't forget to adjust the reference values.
Split the image into smaller regions. Then, when all the regions change in the same way, you know it's a global change (like an eclipse or similar), but if only one region's parameters are changing, then something is happening there.
Use masks to create smart regions. If you're watching a street, filter out the sky, the trees (blown by the wind), etc. You may set up different trigger values for different regions. The regions should overlap.
A special case of the region is the line. A line (a narrow region) contains fewer, and more homogeneous, pixels than a flat area. Mark, say, a green fence: it's easy to detect whether someone crosses it, since it makes a bigger change in the line than in a flat area.
If you can, change the real world. Repaint the fence in a strange color to create a color spectrum that can be identified more easily. Paint tags on the floor and walls that can be OCRed by the program, so you can detect whether something hides them.
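As a rough illustration of the split-into-regions tip, a sketch in Python (grid size, file names and trigger values are placeholders):

import cv2
import numpy as np

# Compare two consecutive snapshots region by region instead of globally.
# If most regions change, treat it as a global (lighting) change; if only a
# few change, flag those regions.
prev = cv2.imread("frame_prev.jpg", cv2.IMREAD_GRAYSCALE).astype(np.float32)
curr = cv2.imread("frame_curr.jpg", cv2.IMREAD_GRAYSCALE).astype(np.float32)

rows, cols = 8, 8
h, w = prev.shape
changed = []
for r in range(rows):
    for c in range(cols):
        y0, y1 = r * h // rows, (r + 1) * h // rows
        x0, x1 = c * w // cols, (c + 1) * w // cols
        region_diff = np.abs(curr[y0:y1, x0:x1] - prev[y0:y1, x0:x1]).mean()
        if region_diff > 15:                       # per-region trigger value
            changed.append((r, c))

if len(changed) > 0.7 * rows * cols:
    print("global change (lighting?)")
elif changed:
    print("local change in regions:", changed)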
I believe you are looking for Template Matching
Also, I would suggest you look into OpenCV.
We had to contend with many of these issues in our interactive installations. It's tough not to get false positives without being able to control some of your environment (it sounds like you will have some degree of control). In the end we looked at combining some techniques, and we created an open piece of software named OpenTSPS (Open Toolkit for Sensing People in Spaces - http://www.opentsps.com). You can look at the C++ source on GitHub (https://github.com/labatrockwell/openTSPS/).
We use 'progressive background relearning' to adjust to the changing background over time. Progressive relearning is particularly useful in variable lighting conditions - e.g. if lighting in a space changes from day to night. This, in combination with blob detection, works pretty well, and the only way we have found to improve on it is to use 3D cameras like the Kinect, which cast out IR and measure it.
There are other algorithms that might be relevant, like SURF (http://achuwilson.wordpress.com/2011/08/05/object-detection-using-surf-in-opencv-part-1/ and http://en.wikipedia.org/wiki/SURF) but I don't think it will help in your situation unless you know exactly the type of thing you are looking for in the image.
Sounds like a fun project. Best of luck.
The problem you are trying to solve is very interesting indeed!
I think that you would need to attack it in parts:
As you already pointed out, a sudden change in illumination can be problematic. This is an indicator that you probably need to achieve some sort of illumination-invariant representation of the images you are trying to analyze.
There are plenty of techniques lying around; one I have found very useful for illumination invariance (applied to face recognition) is DoG filtering (Difference of Gaussians).
The idea is that you first convert the image to grayscale. Then you generate two blurred versions of this image by applying a Gaussian filter, one a little bit blurrier than the other (you could use sigmas of 1.0 and 2.0 in the Gaussian filter, respectively). Then you subtract the pixel intensities of the more-blurred image from the less-blurred image. This operation enhances edges and produces a similar image regardless of strong illumination intensity variations. These steps can be performed very easily using OpenCV (as others have stated). This technique has been applied and documented here.
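A short sketch of those steps with OpenCV in Python; the input file name is a placeholder, and the sigmas of 1.0 and 2.0 are the ones suggested above:

import cv2

gray = cv2.imread("frame.jpg", cv2.IMREAD_GRAYSCALE)
blur_small = cv2.GaussianBlur(gray, (0, 0), 1.0)     # less blurry version
blur_large = cv2.GaussianBlur(gray, (0, 0), 2.0)     # more blurry version

# Subtract the more-blurred image from the less-blurred one; work in a signed
# type so negative differences are not clipped.
dog = cv2.subtract(blur_small.astype("float32"), blur_large.astype("float32"))

# Optional: histogram-equalize a normalized copy just to visualise the result.
vis = cv2.normalize(dog, None, 0, 255, cv2.NORM_MINMAX).astype("uint8")
vis = cv2.equalizeHist(vis)
cv2.imwrite("dog_visualised.png", vis)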
This paper adds an extra step involving contrast equalization. In my experience this is only needed if you want to obtain "visible" images from the DoG operation (pixel values tend to be very low after the DoG filter and are viewed as black rectangles on screen), and performing a histogram equalization is an acceptable substitute if you want to be able to see the effect of the DoG filter.
Once you have illumination-invariant images you can focus on the detection part. If your problem can afford a static camera that can be trained for a certain amount of time, then you could use a strategy similar to alarm motion detectors. Most of them work with an average thermal image - basically they record the average temperature of the "pixels" of a room view and trigger an alarm when the heat signature varies greatly from one "frame" to the next. Here you wouldn't be working with temperatures, but with average, light-normalized pixel values. This would allow you to build up over time a picture of which areas of the image tend to have movement (e.g. the leaves of a tree in a windy environment) and which areas are fairly stable. Then you could trigger an alarm when a large number of pixels already flagged as stable show a strong variation from one frame to the next.
If you can't afford training your camera view, then I would suggest you take a look at the TLD tracker of Zdenek Kalal. His research is focused on object tracking with a single frame as training. You could probably use the semi-static view of the camera (with no foreign objects present) as a starting point for the tracker, and flag a detection when the TLD tracker (a grid of points where local motion flow is estimated using the Lucas-Kanade algorithm) fails to track a large number of grid points from one frame to the next. This scenario would probably allow even a panning camera to work, as the algorithm is very resilient to motion disturbances.
Hope these pointers are of some help. Good luck and enjoy the journey! =D
Use one of the standard measures, like Mean Squared Error (MSE), to find the difference between two consecutive images. If the MSE is beyond a certain threshold, you know that there is some motion.
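For instance, a tiny sketch (file names and the threshold are placeholders):

import cv2
import numpy as np

# MSE between two consecutive frames.
a = cv2.imread("frame_t0.jpg", cv2.IMREAD_GRAYSCALE).astype(np.float32)
b = cv2.imread("frame_t1.jpg", cv2.IMREAD_GRAYSCALE).astype(np.float32)

mse = np.mean((a - b) ** 2)
if mse > 100.0:
    print("motion detected, MSE =", mse)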
Also read about Motion Estimation.
If you know that the image will remain relatively static, I would recommend:
1) Look into neural networks. You can use them to learn what defines someone within the image and what does not.
2) Look into motion-detection algorithms; they are used all over the place.
3) Is your camera capable of thermal imaging? If so, it may be worthwhile to look for hotspots in the images. There may be existing algorithms to turn your webcam into a thermal imager.

Time delays and Model View Controller

I am implementing a turn-based game; there are two sides, and each side has several units. At each specific moment only one unit can move across the board.
Since only one unit can move at a time, after I figure out where it should go, as far as the simulation is concerned it can instantly be teleported there. But when playing the game you would want to see the unit moving, so that you realize who moved and where it went.
The question is: would you put the movement algorithm (e.g. interpolating between 2 points in N seconds) in the model and then have the view show the unit at the interpolated position without even knowing that it is moving, or teleport the unit and notify the view that it should show the unit moving as best as it wants?
If you would take the second approach, how would you keep the simulation from running too far ahead of the view? Would you put the view in charge of resuming the simulation after the movement has ended?
Thanks in advance, Xtapodi.
Ah, yet another example that reminds us that MVC was never originally designed for real-time graphics. ;)
I would store the current position and the previous position in the model. When the object moves, the current position is copied into the previous position, the new position is copied into the current position, and a notification is sent to the view that the model has changed. The view can then interpolate between the old and the new position accordingly. It can speed up, slow down, or even remove the interpolation entirely based on the specific view settings, without requiring any extra data to be stored within the model.
Rather than storing the current position and the previous position, you could instead just store the last move with each unit, and the move itself contains the previous position. This is probably more versatile if you ever need to store extra information about a move.
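A small sketch of the previous/current-position idea (the class and field names are made up, not from any particular framework):

import time

# The model stores previous and current positions and notifies the view;
# the view interpolates on its own schedule.
class UnitModel:
    def __init__(self, x, y):
        self.prev = (x, y)
        self.curr = (x, y)
        self.moved_at = time.time()
        self.listeners = []

    def move_to(self, x, y):
        self.prev = self.curr
        self.curr = (x, y)
        self.moved_at = time.time()
        for listener in self.listeners:
            listener(self)                      # notify the view(s)

class UnitView:
    MOVE_SECONDS = 0.5                          # purely a view setting

    def draw_position(self, model):
        t = min(1.0, (time.time() - model.moved_at) / self.MOVE_SECONDS)
        x = model.prev[0] + (model.curr[0] - model.prev[0]) * t
        y = model.prev[1] + (model.curr[1] - model.prev[1]) * t
        return x, y                             # where to draw the unit this frame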
What you probably want is to have the unit image move each frame. How far to move the image each frame is similar to your interpolation.
unitsPerFrame = totalUnits / (framesPerSecond * totalSeconds)
So if I want to move an image from position 0 to position 60 in 2 seconds and my framerate is 30, I need to move 60 units in 60 frames, therefore my speed is 1. So each frame, I move the image 1 unit, and if moving the unit will take me beyond my destination, simply set my location to my destination.
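A small sketch with the example's numbers (0 to 60 over 2 seconds at 30 fps, so the step works out to 1 unit per frame):

start, destination = 0.0, 60.0
frames_per_second, total_seconds = 30, 2.0
units_per_frame = (destination - start) / (frames_per_second * total_seconds)

position = start
for frame in range(int(frames_per_second * total_seconds) + 5):
    position += units_per_frame
    if (units_per_frame > 0 and position > destination) or \
       (units_per_frame < 0 and position < destination):
        position = destination                  # clamp so we never overshoot
    # draw the unit at `position` here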
