Neutrons through a reactor shield - probability

I have been staring at my screen since last Thursday. I am very new to coding and am not at all sure how to go about this problem; any help would be greatly appreciated.
A beam of neutrons bombards a reactor's wall. Consider the motion of the neutrons as a random walk on the (x,y) plane.
The neutrons then have to comply with the following conditions:
Only four directions of motion are possible (left, right, up or down).
On the next step the neutron cannot step back; it can only continue forward or turn left or right.
The probability of going forward is four times the probability of turning in a given direction.
On each step the neutron loses one unit of energy.
The initial neutron energy is enough for 100 steps.
The initial neutron velocity is perpendicular to the shield.
There are three possibilities for a neutron after entering the shield: it can return to the reactor core, it can be absorbed inside the shield, or it can penetrate and travel through the shield.
When a neutron re-enters the reactor or when it goes through the shield, its random path stops, regardless of its energy.
I would share the code I have right now, but it does absolutely nothing and is useless for solving the problem. Thank you
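For reference, here is a minimal Monte Carlo sketch of the stated rules in Python; the shield thickness and trial count are illustrative assumptions, since the problem doesn't give them:

    import random

    # The shield occupies 0 <= x < SHIELD_THICKNESS; the neutron enters at
    # x = 0 moving in +x. SHIELD_THICKNESS and N_TRIALS are made-up values.
    SHIELD_THICKNESS = 20
    N_TRIALS = 100_000
    STEPS = 100                      # initial energy lasts 100 steps

    def walk_one_neutron():
        x, y = 0, 0
        dx, dy = 1, 0                # initial velocity perpendicular to shield
        for _ in range(STEPS):
            # forward is 4x as likely as each turn: weights 4:1:1
            move = random.choices(("forward", "left", "right"), weights=(4, 1, 1))[0]
            if move == "left":
                dx, dy = -dy, dx     # rotate 90 degrees left
            elif move == "right":
                dx, dy = dy, -dx     # rotate 90 degrees right
            x, y = x + dx, y + dy
            if x < 0:
                return "returned"    # back into the reactor core
            if x >= SHIELD_THICKNESS:
                return "penetrated"  # through the shield
        return "absorbed"            # energy ran out inside the shield

    counts = {"returned": 0, "absorbed": 0, "penetrated": 0}
    for _ in range(N_TRIALS):
        counts[walk_one_neutron()] += 1
    for outcome, n in counts.items():
        print(outcome, n / N_TRIALS)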

Related

Line follower bot location

I am working on a line follower bot that travels on a map consisting of nodes. My confusion is how to let the bot know which node it is standing on; in other words, what approach should be taken to feed the map to the bot so that it knows every node of the map and also knows which node it is currently at.
I have searched the internet a lot, but nothing I found seems suitable.
Line followers usually do not have any map. Instead they usually have a pair of front sensors pointing downwards (usually IR photodiodes and LEDs) which detect the line crossing from the left and right sides, and the robot just turns toward the line.
It's usually done by controlling the speed of the left and right motors with the brightness of the light detected by the right and left sensors (often without any MCU or CPU: the analog version uses just two comparators and a power amplifier to drive the motors, which results in much smoother movement than the zig-zag pattern).
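Written out as a control loop, that differential steering looks something like this minimal Python sketch (the sensor and motor functions are hypothetical placeholders):

    # Each wheel's speed is biased by the opposite sensor's brightness, so
    # the robot steers back toward the line. All callbacks are hypothetical.
    def follow_line(read_left, read_right, drive, base=0.5, gain=0.4):
        while True:
            error = read_right() - read_left()   # > 0: line drifting right
            drive(left=base + gain * error,      # speed up the left wheel
                  right=base - gain * error)     # to steer back toward it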
Better bots also have built-in algorithms to search for the line if it has gaps (that usually requires a CPU or MCU).
If you insist on having a map, then you need an interface to copy it in (ISP, for example). However, to detect where the robot is, it needs to actually follow the line while remembering the trajectory, and compare that against the map until the detected trajectory corresponds to only one location and orientation in the map. You will just end up with a more complex and less reliable robot that has more or less the same or worse properties than a simple line follower.
Another option is to use a positioning system: either there is a positioning system built into the maze or map (markers, transponders, or whatever), or you place your robot at a predetermined position and orientation and hit a reset button, or you use accelerometers and gyros to integrate the position over time. However, as mentioned, I see no benefit in any of this for a line follower. This kind of thing is better suited to unknown-maze-solver robots (they usually use SONAR, or also IR photodiode+LED, but oriented forward and to the sides instead of downwards).

Finding an algorithm that can find the shortest way to solve a one-dimensional variant of the "lights out" game

I'm solving a task from an old programming competition. The task is to make a program that can find whether a possible solution exists, and what the shortest solution is, for a version of the well-known game "lights out". In short, we have several lights connected. By clicking one of the lights, you toggle its status and that of the two adjacent lights. The goal is to activate all the lights.
In the classic version of "lights out" we are working with two dimensions, but in this version the lights are connected in a "one-dimensional" string, where the "edges" are connected. Basically, a circle of lights.
The number of lights can go up to 10000, so the brute-force method I tried was obviously not good enough. It only manages to solve the versions that have a solution and fewer than ~10 lights. Here is an example of a solvable setup. The 1's mark lights that are activated, and the 0's mark lights that are deactivated. The first line gives the number of lights in the string. If a solution doesn't exist, the program will output that it isn't possible. Remember that the edges are connected.
5
10101
Click one of the "edges" (it doesn't matter which one; I clicked the left one).
01100
Click the opposite edge
11111
If a solution doesn't exist, the program outputs a message saying so; otherwise, it outputs the length of the shortest solution, in this case 2.
Could anyone help me find an algorithm?
Thanks for the help.
Suppose you knew whether, in the solution (if one exists), you need to click on the first and the second light.
Once we have this information, we can immediately deduce whether we need to click on the third light as this is the last choice that can affect the second light (clicking on the first light changes the last/first/second, clicking on the second light changes the first/second/third, clicking on the third light changes the second/third/fourth - but no other clicks can change the second light).
Similarly, we can then immediately deduce whether to click the fourth light, as this is the last choice that can affect the third light.
You can then work all the way round to the end to find out whether you have a consistent solution (with all lights out).
Simply try all 4 options for the first 2 switches, and pick the best-scoring consistent one.
Complexity: O(4n), i.e. linear, since each of the four guesses takes one O(n) pass.
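In Python, the whole deduction fits in a few lines. A minimal sketch, assuming at least three lights and that clicking light i toggles lights i-1, i and i+1 (mod n), as in the example:

    def solve_lights(state):
        # state: list of 0/1; the goal is all 1s. Returns the minimum
        # number of clicks, or None if no solution exists.
        n = len(state)
        best = None
        for first in (0, 1):
            for second in (0, 1):
                click = [0] * n
                click[0], click[1] = first, second
                # Light i-1 is toggled only by clicks i-2, i-1 and i, so
                # click[i] is forced once the two previous clicks are fixed:
                for i in range(2, n):
                    click[i] = 1 ^ state[i - 1] ^ click[i - 2] ^ click[i - 1]
                # Consistency check on the wrap-around lights n-1 and 0:
                last_on = state[n - 1] ^ click[n - 2] ^ click[n - 1] ^ click[0]
                first_on = state[0] ^ click[n - 1] ^ click[0] ^ click[1]
                if last_on == 1 and first_on == 1:
                    total = sum(click)
                    if best is None or total < best:
                        best = total
        return best

    print(solve_lights([1, 0, 1, 0, 1]))   # the example above: prints 2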

Unreal Engine 4: Character collider goes through floor when crouching

I'm using the 3rd person blueprint template and have added custom sprint and crouch functionality to it. When crouching, I trigger the crouching animations according to the character's speed and set the max walk speed to a low value. I can interrupt the crouch by sprinting and vice versa, and I can stand up from the crouch by pressing the crouch key again or attempting to jump.
It all worked quite well, until I attempted to manipulate the capsule collider's half height according to the character's speed whenever crouch, jump, or sprint is pressed. I can see the collider working as expected; however, when I try to crouch, the character's feet sink into the ground, and when I try to stand up again, the character falls through the floor.
Any help would be greatly appreciated.
The problem is that just shrinking the half-height is probably not what you want when your character is crouching, because the collision capsule shrinks from both the top and the bottom.
So the feet of your character start to sink into the ground, and when you grow the capsule again it clips through your level and falls down due to gravity.
You have two possibilities to fix this:
Use two capsules on your character, one for crouching and one for standing, and only activate the one you are using.
Move the capsule down at the same time you are shrinking it.
The bottom of the capsule needs to end up at the same point, so move the capsule lower as it shrinks; a quick sketch of the arithmetic is below.
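For the second option, keeping the capsule's bottom fixed pins down exactly how far to move it. A minimal sketch; the function name and numbers are illustrative:

    # Keep the capsule's bottom (the feet) fixed while changing half-height:
    # bottom = center_z - half_height must stay constant.
    def recentered_capsule_z(center_z, old_half_height, new_half_height):
        bottom = center_z - old_half_height    # world Z of the feet
        return bottom + new_half_height        # new center Z, feet unchanged

    # e.g. shrinking the half-height from 96 to 48 moves the center down 48:
    print(recentered_capsule_z(100.0, 96.0, 48.0))   # 52.0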

Platformer Game - A realistic path-finding algorithm

I am making a game and I have come across a part that is hard to implement in code. My game is a tile-based platformer with lots of enemies chasing you. Basically, I want my enemies to be able to find the realistic, shortest path to my player every frame/second/2 seconds. I originally thought of A* as a solution, but it leads the enemies along paths that defy gravity, which is not good. Also, multiple enemies will be using it every second to get the latest path, and then walking the first few tiles of it; they will be discarding the rest of the path every second and just following the first few tiles. I know this seems like a lot (calculating a new path every second, all at the same time if there is more than one enemy), but I don't know any other way to achieve what I want.
This is a picture of what I want:
Explanation: The green figure is the player, the red one is an enemy. The grey tiles are regular, open, empty tiles, the brown tiles are ones that you can stand on, and the highlighted yellow tiles represent the path that I want my enemy to be able to find in order to realistically get to the player.
So, the question is: what realistic path-finding algorithm can I use to achieve this, while keeping it fast?
EDIT*
I updated the picture to represent the most complicated map there could be. This map represents what the player of my game actually sees: they just use WASD to move around, and they see themselves move through this 2D platformer view. There will be different types of enemies, all with different speeds and jump heights, but all will have enough jump height and speed to make the jumps in this map and maneuver through it. The maps are generated simply by reading an XML file that has the level data in it. The data is parsed and different types of tiles are placed in the tile-holding sprite according to what the XML says, e.g. for an XML node (type="reg" graphic="grass2" x="5" y="7"), the x and y are multiplied by the constant gridSize (like 30 or something) and the tile is placed down accordingly. The enemies get their frame-by-frame instructions from an AI class attached to them. This class is responsible for producing the path and returning the first direction to the enemy; this should only happen every second or so, so that the enemies don't follow an old, wrong path. Please let me know if you understand my concept and have some thoughts/ideas, or maybe even the answer I'm looking for.
ALSO: the physics in this game is separate from the pathfinding; it works just fine, using an AABB vs. AABB approach (the player and enemies also being AABBs).
The trick with using A* here is how you link tiles together to form available paths. Take, for example, the first gap the red enemy would need to cross. The 'link' to the next platform (i.e. the brown tile to the left) is actually a jump action, not a move action. Additionally, it's up to you to determine how the nodes connect together; I'd add a heavy penalty when moving from a gray tile over a brown tile to a gray tile with nothing underneath, just for starters (without discouraging jumps that open a shortcut).
There are two routes I see personally: run a quick prediction of how far and where the player can jump and adjust how the algorithm determines node adjacency, or accept the path, determine when parts of it "hang" in the air (no brown tile immediately below), and animate the enemy 'jumping' to the next part of the path. The trick is handling things when the enemy may pass through brown tiles in the event the path isn't a parabola.
I am not versed in either solution; just something I've thought about.
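As a rough illustration of those penalised links, here is a minimal grid A* sketch in Python. The jump range and edge costs are made-up numbers; a real implementation would derive them, plus vertical jump edges, from each enemy's speed and jump height:

    import heapq

    # grid[y][x] is True for solid (brown) tiles. A tile is "standable" if
    # it is open with a solid tile directly below it. Walk edges connect
    # adjacent standable tiles; jump edges cross gaps of up to JUMP_RANGE
    # tiles at a higher cost, so the search prefers solid ground.
    JUMP_RANGE, WALK_COST, JUMP_COST = 3, 1, 4

    def standable(grid, x, y):
        return (0 <= y < len(grid) - 1 and 0 <= x < len(grid[0])
                and not grid[y][x] and grid[y + 1][x])

    def neighbors(grid, x, y):
        for dx in (-1, 1):
            if standable(grid, x + dx, y):
                yield (x + dx, y), WALK_COST        # plain walk
            else:
                for d in range(2, JUMP_RANGE + 1):  # try to jump the gap
                    if standable(grid, x + d * dx, y):
                        yield (x + d * dx, y), JUMP_COST
                        break

    def astar(grid, start, goal):
        frontier = [(0, start)]
        g, came_from = {start: 0}, {}
        while frontier:
            _, node = heapq.heappop(frontier)
            if node == goal:                        # rebuild the path
                path = [node]
                while node in came_from:
                    node = came_from[node]
                    path.append(node)
                return path[::-1]
            for nxt, cost in neighbors(grid, *node):
                new_g = g[node] + cost
                if new_g < g.get(nxt, float("inf")):
                    g[nxt], came_from[nxt] = new_g, node
                    h = abs(nxt[0] - goal[0]) + abs(nxt[1] - goal[1])
                    heapq.heappush(frontier, (new_g + h, nxt))
        return None                                 # no path found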
You need to give us the most complicated case of map, player, and enemy behaviour (including jumping up and across speeds) that you are going to create, either automatically or manually, so we can give relevant advice. The given map is so simple: put the map in a 2-dimensional array with the initial player location as an element of that map, then test whether the lower-numbered column on the same row is occupied by brown; if not, put the player there, and repeat until false; then do the same for the higher column on the same row, and so on, to move the enemy.
Update: from my reading of the stage generation, it's something you create, not semi-random.
My suggestion is that the enemy creates clones of itself with the same AI but invisible, and each clone starts going in a different direction (jump up/left/right, jump diagonally right/left); every time one succeeds, it creates a new clone. Basically a genetic algorithm. From the map, it seems an enemy never needs to evaluate one path over another: one way simply fails to get closer to the player's initial position and another doesn't.

Looking for ways for a robot to locate itself in the house

I am hacking a vacuum cleaner robot to control it with a microcontroller (Arduino). I want to make it more efficient when cleaning a room. For now, it just goes straight and turns when it hits something.
But I am having trouble finding the best algorithm or method to use to know its position in the room. I am looking for an idea that stays cheap (less than $100) and not too complex (one that doesn't require a PhD thesis in computer vision). I can add some discrete markers in the room if necessary.
Right now, my robot has:
One webcam
Three proximity sensors (around 1 meter range)
Compass (not used for now)
Wi-Fi
Its speed can vary depending on whether the battery is full or nearly empty
A netbook Eee PC is embedded on the robot
Do you have any ideas for doing this? Does any standard method exist for this kind of problem?
Note: if this question belongs on another website, please move it, I couldn't find a better place than Stack Overflow.
The problem of figuring out a robot's position in its environment is called localization. Computer science researchers have been trying to solve this problem for many years, with limited success. One problem is that you need reasonably good sensory input to figure out where you are, and sensory input from webcams (i.e. computer vision) is far from a solved problem.
If that didn't scare you off: one of the approaches to localization that I find easiest to understand is particle filtering. The idea goes something like this:
You keep track of a bunch of particles, each of which represents one possible location in the environment.
Each particle also has an associated probability that tells you how confident you are that the particle really represents your true location in the environment.
When you start off, all of these particles might be distributed uniformly throughout your environment and be given equal probabilities. Here the robot is gray and the particles are green.
When your robot moves, you move each particle. You might also degrade each particle's probability to represent the uncertainty in how the motors actually move the robot.
When your robot observes something (e.g. a landmark seen with the webcam, a wifi signal, etc.) you can increase the probability of particles that agree with that observation.
You might also want to periodically replace the lowest-probability particles with new particles based on observations.
To decide where the robot actually is, you can either use the particle with the highest probability, the highest-probability cluster, the weighted average of all particles, etc.
If you search around a bit, you'll find plenty of examples: e.g. a video of a robot using particle filtering to determine its location in a small room.
Particle filtering is nice because it's pretty easy to understand. That makes implementing and tweaking it a little less difficult. There are other similar techniques (like Kalman filters) that are arguably more theoretically sound but can be harder to get your head around.
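To make the moving parts concrete, here is a toy one-dimensional particle filter in Python following the steps above; the corridor, noise levels, and sensor model are all illustrative assumptions:

    import math
    import random

    # State: the robot's x position along a 10 m corridor. Observation: a
    # noisy range reading to the wall at x = 0. All constants are made up.
    N, MOTION_NOISE, SENSOR_NOISE = 500, 0.1, 0.5

    particles = [random.uniform(0.0, 10.0) for _ in range(N)]  # uniform start
    weights = [1.0 / N] * N

    def move(step):
        # Move every particle, adding noise for motor uncertainty.
        global particles
        particles = [p + step + random.gauss(0.0, MOTION_NOISE) for p in particles]

    def observe(measured_range):
        # Raise the weight of particles that agree with the observation.
        global weights
        weights = [math.exp(-(p - measured_range) ** 2 / (2 * SENSOR_NOISE ** 2))
                   for p in particles]
        total = sum(weights) or 1.0
        weights = [w / total for w in weights]

    def resample():
        # Replace low-probability particles by drawing new ones by weight.
        global particles, weights
        particles = random.choices(particles, weights=weights, k=N)
        weights = [1.0 / N] * N

    def estimate():
        # Weighted average of all particles, one of the options above.
        return sum(p * w for p, w in zip(particles, weights))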
A QR code poster in each room would not only make an interesting modern art piece, but would be relatively easy to spot with the camera!
If you can place some markers in the room, using the camera could be an option. If two known markers have an angular displacement (left to right), then the camera and the markers lie on a circle whose radius is related to the measured angle between the markers. I don't recall the formula offhand, but the arc segment (on that circle) between the markers will be twice the angle you see. If you have the markers at a known height and the camera at a fixed angle of inclination, you can compute the distance to the markers. Either of these methods alone can nail down your position, given enough markers. Using both will help do it with fewer markers.
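For reference, the missing formula is the inscribed angle theorem. A minimal sketch (the function name is illustrative):

    import math

    # Inscribed angle theorem: two markers a known distance d apart, seen
    # from the camera with angle theta between them, put the camera on a
    # circle of radius d / (2 * sin(theta)) passing through both markers.
    def circle_radius(marker_distance, observed_angle_rad):
        return marker_distance / (2.0 * math.sin(observed_angle_rad))

    # e.g. markers 2 m apart that appear 30 degrees apart give R = 2 m:
    print(circle_radius(2.0, math.radians(30)))   # 2.0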
Unfortunately, those methods are imperfect due to measurement errors. You get around this by using a Kalman estimator to incorporate multiple noisy measurements and arrive at a good position estimate; you can then feed in some dead-reckoning information (which is also imperfect) to refine it further. This part goes pretty deep into math, but I'd say it's a requirement to do a great job at what you're attempting. You can do OK without it, but if you want an optimal solution (in terms of the best position estimate for the given input), there is no better way. If you actually want a career in autonomous robotics, this will play a large part in your future.
Once you can determine your position, you can cover the room in any pattern you'd like. Keep using the bump sensor to help construct a map of obstacles, and then you'll need to devise a way to sweep the room that incorporates the obstacles.
Not sure if you've got the math background yet, but here is the book:
http://books.google.com/books/about/Applied_optimal_estimation.html?id=KlFrn8lpPP0C
This doesn't replace the accepted answer (which is great, thanks!), but I might recommend getting a Kinect and using that instead of your webcam, either through Microsoft's recently released official drivers or the hacked drivers if your EeePC doesn't have Windows 7 (presumably it does not).
That way the positioning will be improved by the 3D vision. Observing landmarks will now tell you how far away the landmark is, and not just where in the visual field that landmark is located.
Regardless, the accepted answer doesn't really address how to pick out landmarks in the visual field, and simply assumes that you can. While the Kinect drivers may already include feature detection (I'm not sure), you can also use OpenCV for detecting features in the image.
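For example, a few lines of OpenCV are enough to pull out candidate features. A sketch: ORB is just one detector choice, and the filename is a placeholder:

    import cv2

    # Detect candidate landmark features in a grayscale camera frame.
    img = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)
    orb = cv2.ORB_create()
    keypoints, descriptors = orb.detectAndCompute(img, None)
    print(len(keypoints), "candidate features")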
One solution would be to use a strategy similar to "flood fill" (Wikipedia). To get the controller to accurately perform sweeps, it needs a sense of distance. You can calibrate your bot using the proximity sensors: e.g. run the motor for 1 sec = xx change in proximity. With that info, you can move your bot an exact distance, and continue sweeping the room using flood fill.
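A tiny sketch of that calibration step (the motor and sensor callbacks are hypothetical):

    # Estimate travel speed in sensor units per second: face a wall, drive
    # straight for a fixed time, and see how much the reading changed.
    def calibrate_speed(read_proximity, run_motor, seconds=1.0):
        before = read_proximity()        # distance to the wall ahead
        run_motor(seconds)               # drive straight for `seconds`
        after = read_proximity()
        return (before - after) / seconds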
Assuming you are not looking for a generalised solution, you may actually know the room's shape, size, potential obstacle locations, etc. When the bot leaves the factory there is no info about its future operating environment, which kind of forces it to be inefficient from the outset.
If that's your case, you can hardcode that info, and then use basic measurements (i.e. rotary encoders on the wheels plus the compass) to precisely figure out its location in the room/house. No need for Wi-Fi triangulation or crazy sensor setups, in my opinion. At least for a start.
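Dead reckoning from those two inputs is only a few lines. A sketch; the names and the metres-per-tick constant are illustrative:

    import math

    # Integrate wheel-encoder travel along the compass heading to track the
    # robot's pose.
    class Pose:
        def __init__(self, x=0.0, y=0.0):
            self.x, self.y = x, y

        def update(self, encoder_ticks, heading_rad, metres_per_tick=0.001):
            d = encoder_ticks * metres_per_tick   # distance this update
            self.x += d * math.cos(heading_rad)
            self.y += d * math.sin(heading_rad)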
Have you considered GPS? Every position on earth has unique GPS coordinates, with a resolution of 1 to 3 metres, and with differential GPS you can get down to the sub-10 cm range. More info here:
http://en.wikipedia.org/wiki/Global_Positioning_System
And Arduino has lots of options for GPS modules:
http://www.arduino.cc/playground/Tutorials/GPS
After you have collected all the key coordinate points of the house, you can then write the routine for the Arduino to move the robot from point to point (as collected above), assuming it handles all the obstacle-avoidance stuff.
More information can be found here:
http://www.google.com/search?q=GPS+localization+robots&num=100
And in that list I found this, specifically for your case, Arduino + GPS + localization:
http://www.youtube.com/watch?v=u7evnfTAVyM
I was thinking about this problem too. But I don't understand why you can't just triangulate. Have two or three beacons (e.g. IR LEDs of different frequencies) and an IR rotating sensor 'eye' on a servo. You could then get an almost constant fix on your position. I expect the accuracy would be in the low-cm range, and it would be cheap. You can then map anything you bump into easily.
Maybe you could also use any interruption of the beacon beams to plot objects that are quite far from the robot, too.
You have a camera, you said? Did you consider looking at the ceiling? There is little chance that two rooms have identical dimensions, so you can identify which room you are in; the position in the room can be computed from the angular distance to the borders of the ceiling, and the direction can probably be extracted from the positions of the doors.
This will require some image processing, but since the vacuum cleaner must move slowly to clean efficiently, it will have enough time to compute.
Good luck!
Use an ultrasonic sensor such as the HC-SR04 or similar.
As noted above, sense the distance from the robot to the walls with the sensors, and identify which part of the room you are in with a QR code.
When you are near a wall, turn 90 degrees, move the width of your robot, then turn 90 degrees again (i.e. a second left turn), and drive forward again. I think this will help :) A sketch of that sweep pattern is below.
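As a sketch, that lawn-mower sweep might look like this; all helper functions, the lane width, and the wall threshold are hypothetical:

    # Drive straight until a wall is near, shift over one robot-width, and
    # come back the other way, alternating turn direction each lane.
    def sweep(read_distance, forward, turn_left_90, turn_right_90,
              lane=0.3, wall=0.1):
        going_left = True
        while True:
            while read_distance() > wall:   # drive until a wall is close
                forward(0.05)               # small forward increments
            turn = turn_left_90 if going_left else turn_right_90
            turn()                          # two same-direction turns
            forward(lane)                   # shift over one lane width
            turn()
            going_left = not going_left     # zig-zag back the other way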
