Find LLDP neighbour - access points participating in a mesh network - SNMP

Is it possible to find the neighbour in a wireless mesh network? If there is an AP, say AP1, which is connected to a switch and also to another AP, say AP2, to form a mesh, is it possible to find the neighbour of AP1 (which is AP2) if both access points support LLDP?

It is possible, but not supported by all vendors; Motorola and HP are the only access point vendors I'm interested in. Looking at their docs, I think the models my company uses have LLDP support to discover neighbours even over a mesh link.
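If the APs expose the standard LLDP-MIB over SNMP, the neighbour list can be read by walking lldpRemSysName. Below is a minimal sketch using pysnmp; the host address and community string are placeholders, and whether the mesh radio link itself shows up as an LLDP neighbour depends on the vendor firmware.

from pysnmp.hlapi import (
    SnmpEngine, CommunityData, UdpTransportTarget, ContextData,
    ObjectType, ObjectIdentity, nextCmd,
)

LLDP_REM_SYS_NAME = '1.0.8802.1.1.2.1.4.1.1.9'  # lldpRemSysName (LLDP-MIB)

def lldp_neighbours(host, community='public'):
    """Walk lldpRemSysName on an AP and return the neighbour system names."""
    names = []
    for error_indication, error_status, _, var_binds in nextCmd(
            SnmpEngine(),
            CommunityData(community),
            UdpTransportTarget((host, 161)),
            ContextData(),
            ObjectType(ObjectIdentity(LLDP_REM_SYS_NAME)),
            lexicographicMode=False):
        if error_indication or error_status:
            break
        for _, value in var_binds:
            names.append(str(value))
    return names

print(lldp_neighbours('192.0.2.10'))  # e.g. ['AP2'] if AP1 sees AP2 over the mesh link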

Related

Beacon heading calculation

I am planning to create an indoor navigation system. It will provide the user with the shortest path from source to destination.
Suppose I know all beacon locations.
Is it possible to calculate the direction the user is facing from a set of beacons?
Sorry, this is not possible.
Off the shelf beacons are omnidirectional transmitters and mobile phones are omnidirectional receivers. There is no way to determine directionality of the signal.
The good news is that phones have two other ways of getting heading information -- the compass and a velocity vector. The compass is the general approach if you are standing still. If you are in motion, just calculate the vector from the last known location to the current location to get the heading.
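As a minimal sketch of the velocity-vector approach, assuming the beacon system reports positions in a local planar x/y frame (the frame and the "north along +y" convention are assumptions):

import math

def heading_from_motion(prev_xy, curr_xy):
    """Heading in degrees, clockwise from the +y ('north') axis of the local
    frame, from the previous position fix to the current one."""
    dx = curr_xy[0] - prev_xy[0]
    dy = curr_xy[1] - prev_xy[1]
    return math.degrees(math.atan2(dx, dy)) % 360

# Example: moving from (2.0, 3.0) to (4.0, 3.0) gives a heading of 90 degrees
print(heading_from_motion((2.0, 3.0), (4.0, 3.0)))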

DJI Mobile SDK - how is distance between adjacent waypoints calculated?

Using the DJI Mobile SDK to upload Waypoint Missions, if two adjacent waypoints are determined by DJI to be too close (within 0.5 meters), the upload is rejected.
Does anyone know the algorithm used to determine the distance between adjacent waypoints in a waypoint mission?
Specifically, is the DJI algorithm using a haversine calculation for the distance between lat/lon coordinates and, if so, what earth radius is used? Is it the IUGG mean radius of 6371008.8 meters, or some other radius?
Or does it use the ellipsoidal Vincenty formula (WGS-84)?
This information would be useful for more precise waypoint decimation prior to mission upload.
First off, I would comment that DJI answering an internal implementation question is very unlikely, since it would expose them to having to support that implementation over time and across aircraft. Different aircraft and different technologies may result in varying implementations.
What has always worked for me is to use standard "distance between points" calculations, either from common mapping formulas or as built into the platform SDKs (iOS, Android, etc.). I have found these sufficiently accurate to plan even complex flights.
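For illustration, a haversine-based pre-upload check is easy to write. This is not DJI's internal formula, just the standard great-circle calculation using the IUGG mean radius; the 0.5 m threshold mirrors the documented rejection limit.

import math

EARTH_RADIUS_M = 6371008.8  # IUGG mean radius; DJI's actual constant is unknown
MIN_SPACING_M = 0.5         # documented minimum spacing between waypoints

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two lat/lon points in degrees."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2
    return 2 * EARTH_RADIUS_M * math.asin(math.sqrt(a))

def decimate(waypoints):
    """Drop (lat, lon) waypoints closer than MIN_SPACING_M to the previously kept one."""
    kept = [waypoints[0]]
    for wp in waypoints[1:]:
        if haversine_m(*kept[-1], *wp) >= MIN_SPACING_M:
            kept.append(wp)
    return kept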
Based on several tests I can now confirm that the current DJI internal distance computation is dependent on latitude and/or longitude. Meaning that you will get different results for the same (!) waypoint distance pair depending on where your two points are anchored.
A 1.5-meter-distance waypoint pair was accepted as a mission at a location in central Europe but was rejected with WAYPOINT_DISTANCE_TOO_CLOSE at a location in the central US.
(We verified with https://gps-coordinates.org/distance-between-coordinates.php that both waypoint distance pairs had the same 1.5 meter distance between them.)
So it's safe to assume that DJI has a bug in their distance calculation.

Location without GPS and without network (using satellite)

Is there any way to obtain the user's current location using satellites, without GPS (internet) and without a SIM network?
GPS does not require internet; it requires a GPS chip and a processing chip.
GPS is a constellation of satellites (which is what you are asking for) that transmit signals to the ground. These satellites orbit the Earth in MEO (medium Earth orbit), and there are enough of them to cover the whole globe.
We need internet to fetch the map tiles, street names, traffic and navigation utilities, not the user's location.
So the signals we get from these satellites can be processed into the user's location, with accuracy depending on how many satellites you are listening to (a minimum of four for a full 3D fix).
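As a toy illustration of the idea (not how a real receiver works; a real receiver solves pseudoranges plus a clock-bias term, which is why four satellites are needed), a 2D trilateration from known anchor positions and measured ranges looks like this:

import numpy as np

def trilaterate_2d(anchors, ranges):
    """Estimate (x, y) from anchor positions and measured distances by
    linearising the circle equations and solving the least-squares system."""
    (x1, y1), r1 = anchors[0], ranges[0]
    A, b = [], []
    for (xi, yi), ri in zip(anchors[1:], ranges[1:]):
        A.append([2 * (xi - x1), 2 * (yi - y1)])
        b.append(r1**2 - ri**2 + xi**2 - x1**2 + yi**2 - y1**2)
    pos, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return pos

# Receiver at (3, 4) with three anchors; ranges are the true distances
print(trilaterate_2d([(0, 0), (10, 0), (0, 10)],
                     [5.0, np.hypot(7, 4), np.hypot(3, 6)]))  # ~[3. 4.]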

How to map a house layout, room by room to be used for simple room to room navigation by a robot?

I am planning to build a robot, basically an Arduino coupled with a webcam and an RC car, to navigate from one point in the house to another using a map of the house layout, possibly made from a webcam tour of the place.
It should receive a command telling it where to go from my smartphone or PC. Each room will have an ID code which the robot should use to determine the travel path.
Also, it should be able to go to the room where I am based on locating me using Bluetooth or Wifi.
Sensors: Proximity sensors and light sensors
I live in the house, so that is not an issue.
Any ideas on where I can start?
I participated in a similar project; it will be more difficult than you think.
We used Bluetooth beacons. Fix their positions, then you can measure the signal strength with the robot. If you know the positions of the beacons (they are fixed), you can calculate where the robot actually is. But they are very inaccurate, and it takes a couple of seconds to scan all the beacons.
If you want to navigate through your house, I think the easiest way is to plant the beacons, then go around the house with the robot and measure the signals (the more, the better). This way you can create a discrete layout of your house. In my opinion, the easiest way to store the map is to represent the layout as a graph. The nodes are the discrete points you measured, and there is an edge between two nodes if the robot can travel between them in "one step". This way you can also represent temporary obstacles, for example by deleting an edge. And the robot can easily determine which way to go: just use Dijkstra's algorithm, as in the sketch below.
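A minimal sketch of that last step, with hypothetical room names and edge costs standing in for the measured layout:

import heapq

def dijkstra(graph, start, goal):
    """Shortest path on a weighted graph given as {node: {neighbour: cost}}."""
    dist = {start: 0.0}
    prev = {}
    queue = [(0.0, start)]
    while queue:
        d, node = heapq.heappop(queue)
        if node == goal:
            break
        if d > dist.get(node, float('inf')):
            continue
        for nbr, cost in graph.get(node, {}).items():
            nd = d + cost
            if nd < dist.get(nbr, float('inf')):
                dist[nbr] = nd
                prev[nbr] = node
                heapq.heappush(queue, (nd, nbr))
    path, node = [goal], goal
    while node != start:          # assumes the goal is reachable
        node = prev[node]
        path.append(node)
    return list(reversed(path))

# Nodes are measured points (here simplified to rooms); deleting an edge
# models a temporary obstacle between two points.
rooms = {
    'hall':    {'kitchen': 3.0, 'living':  2.0},
    'kitchen': {'hall':    3.0, 'living':  4.0},
    'living':  {'hall':    2.0, 'kitchen': 4.0, 'bedroom': 5.0},
    'bedroom': {'living':  5.0},
}
print(dijkstra(rooms, 'hall', 'bedroom'))  # ['hall', 'living', 'bedroom']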

Making 3D representation of an object with a webcam

Is it possible to make a 3D representation of an object by capturing many different angles using a webcam? If it is, how is it possible and how is the image-processing done?
My plan is to make a 3D representation of a person using a webcam; from the 3D representation, I will be able to tell the person's vital statistics.
As Bart said (but did not post as an actual answer) this is entirely possible.
The research topic you are interested in is often called multi view stereo or something similar.
The basic idea revolves around using point correspondences between two (or more) images and then trying to find the best matching camera positions. When the positions are found, you can use stereo algorithms to back-project the image points into a 3D coordinate system and form a point cloud.
You can then process that point cloud further to get the measurements you are looking for.
If you are completely new to the subject you have some fascinating reading to look forward to!
Bart proposed Multiple view geometry by Hartley and Zisserman, which is a very nice book indeed.
As Bart and Kigurai pointed out, this process has been studied under the title of "stereo" or "multi-view stereo" techniques. To be able to get a 3D model from a set of pictures, you need to do the following:
a) You need to know the "internal" parameters of a camera. This includes the focal length of the camera, the principal point of the image and account for radial distortion in the image.
b) You also need to know the position and orientation of each camera with respect to each other or a "world" co-ordinate system. This is called the "pose" of the camera.
There are algorithms to perform (a) and (b) which are described in Hartley and Zisserman's "Multiple View Geometry" book. Alternatively, you can use Noah Snavely's "Bundler" http://phototour.cs.washington.edu/bundler/ software to also do the same thing in a very robust manner.
Once you have the camera parameters, you essentially know how a 3D point (X,Y,Z) in the world maps to an image co-ordinate (u,v) on the photo. You also know how to map an image co-ordinate to the world. You can create a dense point cloud by searching for a match for each pixel on one photo in a photo taken from a different view-point. This requires a two-dimensional search. You can simplify this procedure by making the search 1-dimensional. This is called "rectification". You essentially take two photos and transform them so that their rows correspond to the same line in the world (simplified statement). Now you only have to search along image rows.
An algorithm for this can be also found in Hartley and Zisserman.
Finally, you need to do the matching based on some measure. There is a lot of literature out there on "stereo matching". Another word used is "disparity estimation". This is basically searching for the match of pixel (u,v) on one photo to its match (u, v') on the other photo. Once you have the match, the difference between them can be used to map back to a 3D point.
You can use Yasutaka Furukawa's "CMVS" or "PMVS2" software to do this. Or, if you want to experiment by yourself, OpenCV is an open-source computer vision toolbox for many of the sub-tasks required for this.
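As a small OpenCV sketch of the rectified matching step: this assumes left.png and right.png are an already rectified stereo pair, and the focal length and baseline below are placeholder values that would come from your own calibration.

import cv2
import numpy as np

left = cv2.imread('left.png', cv2.IMREAD_GRAYSCALE)
right = cv2.imread('right.png', cv2.IMREAD_GRAYSCALE)

# Semi-global block matching: search for each left pixel's match along the same row
matcher = cv2.StereoSGBM_create(minDisparity=0, numDisparities=64, blockSize=5)
disparity = matcher.compute(left, right).astype(np.float32) / 16.0  # SGBM output is fixed-point

# With focal length f (pixels) and baseline B (metres) from calibration,
# depth is Z = f * B / disparity for every pixel with a valid (positive) match.
f, B = 700.0, 0.06  # placeholder values
valid = disparity > 0
depth = np.zeros_like(disparity)
depth[valid] = f * B / disparity[valid]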
This can be done with two webcams, in the same way your eyes work. It is called stereoscopic vision.
Have a look at this:
http://opencv.willowgarage.com/documentation/camera_calibration_and_3d_reconstruction.html
An affordable alternative to get 3D data would be the Kinect camera system.
Maybe not the answer you are hoping for, but Microsoft's Kinect does exactly that; there are some open-source drivers out there that allow you to connect it to your Windows/Linux box.
