I am writing a Windows Mobile application based on GPS technology. Everything is ready, but I need the function that retrieves the GPS location coordinates (latitude and longitude).
I have declared latitude and longitude variables; now I need the GPS call to wire up to a button I created, so that it gets the longitude and latitude values and saves them.
Environment:
Windows Mobile 6.5
Framework 6.5, Professional
Thanks
The GPS Intermediate Driver provides a very simple-to-use API for shared access to GPS data. You can use GPSGetPosition() for your purpose. There is an example of how to use it in the article Using the GPS Intermediate Driver from Native Code.
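If your app is managed (.NET Compact Framework), the same driver is reachable via P/Invoke; the managed-code article in the links below covers the complete version. The sketch below is a rough, untested outline: the GPS_POSITION structure is trimmed to the fields needed for latitude/longitude and padded to the GPS_VERSION_1 size, and the constants are copied from gpsapi.h, so verify both against your SDK headers before relying on it.

```csharp
using System;
using System.Runtime.InteropServices;

public static class GpsId
{
    // Values from gpsapi.h (verify against your SDK headers).
    const uint GPS_VERSION_1      = 1;
    const uint GPS_VALID_LATITUDE  = 0x00000002;
    const uint GPS_VALID_LONGITUDE = 0x00000004;

    [StructLayout(LayoutKind.Sequential)]
    struct GPS_POSITION
    {
        public uint dwVersion;
        public uint dwSize;
        public uint dwValidFields;
        public uint dwFlags;
        // SYSTEMTIME stUTCTime, flattened to keep the layout simple.
        public ushort wYear, wMonth, wDayOfWeek, wDay, wHour, wMinute, wSecond, wMillisecond;
        public double dblLatitude;
        public double dblLongitude;
        public float flSpeed;
        public float flHeading;
        public double dblMagneticVariation;
        public float flAltitudeWRTSeaLevel;
        public float flAltitudeWRTEllipsoid;
        // Fix quality, DOP values and satellite arrays are not needed here;
        // kept as raw padding so dwSize matches sizeof(GPS_POSITION).
        [MarshalAs(UnmanagedType.ByValArray, SizeConst = 272)]
        public byte[] reserved;
    }

    [DllImport("gpsapi.dll")]
    static extern IntPtr GPSOpenDevice(IntPtr hNewLocationData, IntPtr hDeviceStateChange, string szDeviceName, uint dwFlags);
    [DllImport("gpsapi.dll")]
    static extern uint GPSGetPosition(IntPtr hGPSDevice, ref GPS_POSITION pGPSPosition, uint dwMaximumAge, uint dwFlags);
    [DllImport("gpsapi.dll")]
    static extern uint GPSCloseDevice(IntPtr hGPSDevice);

    // Returns true and fills lat/lon when the driver has a fix no older than 3 s.
    public static bool TryGetPosition(out double latitude, out double longitude)
    {
        latitude = longitude = 0;
        IntPtr device = GPSOpenDevice(IntPtr.Zero, IntPtr.Zero, null, 0);
        if (device == IntPtr.Zero)
            return false;
        try
        {
            var pos = new GPS_POSITION();
            pos.dwVersion = GPS_VERSION_1;
            pos.dwSize = (uint)Marshal.SizeOf(typeof(GPS_POSITION));

            if (GPSGetPosition(device, ref pos, 3000, 0) != 0)   // 0 == ERROR_SUCCESS
                return false;
            if ((pos.dwValidFields & GPS_VALID_LATITUDE) == 0 ||
                (pos.dwValidFields & GPS_VALID_LONGITUDE) == 0)
                return false;

            latitude = pos.dblLatitude;
            longitude = pos.dblLongitude;
            return true;
        }
        finally
        {
            GPSCloseDevice(device);
        }
    }
}
```

In your button's Click handler you would call TryGetPosition and, if it returns true, save the two values however your app persists data.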
See Also:
GPS Programming Tips for Windows Mobile
Use GPS And Web Maps For Location-Aware Apps
GPS Intermediate Driver for Windows Mobile
GPS Intermediate Driver
Using the GPS Intermediate Driver from Managed Code
Tango is developed by Google and has an API used for motion tracking on mobile devices. I was wondering whether it could be used from a standalone Java application without Android (Java SE). If not, are there any APIs out there similar to Tango that track motion and depth perception?
I am trying to capture the motion data from a video, not from a camera/webcam, if that is possible at all.
Google's Tango API is compatible with Tango-enabled devices only, so it does not work on all mobile devices, just those that are Tango-enabled. If you try to use the API on a device that is not Tango-enabled, it won't work.
I think you should research OpenCV a bit; it's an open-source computer vision library that is compatible with Java and many other languages. It lets you analyze videos without needing that many sensors (like the raw depth sensors primarily used on Tango-enabled devices).
The Tango API is only available on Tango-enabled devices, which there aren't that many of. That being said, it is possible to create your own motion-tracking and depth-sensitive app with standard Java.
For motion tracking all you need is an accelerometer and a gyroscope, which most phones come equipped with as standard nowadays. You then basically integrate those readings over time and you have an estimate of the device's position and orientation. Note that the accuracy will depend on your hardware and implementation, but be ready for it to be fairly inaccurate thanks to sensor drift and integration errors (see the answer here).
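The integration step itself is only a few lines of arithmetic. The rough sketch below (written in C# like the other examples on this page; the same math applies in Java) assumes you already receive timestamped gyroscope readings and gravity-free, world-frame acceleration from somewhere, and it deliberately ignores drift correction, which is exactly why it degrades quickly in practice.

```csharp
// Naive dead reckoning: integrate the gyro once for orientation and the
// accelerometer twice for position. All names are illustrative; without a
// complementary/Kalman filter the estimate drifts within seconds.
public sealed class DeadReckoning
{
    public double Yaw, Pitch, Roll;   // orientation in radians
    public double Vx, Vy, Vz;         // velocity in m/s
    public double X, Y, Z;            // position in metres

    public void Update(double dt,                          // seconds since last sample
                       double gx, double gy, double gz,    // gyro rates, rad/s
                       double ax, double ay, double az)    // linear accel, m/s^2
    {
        // Orientation: integrate angular rate once.
        Roll  += gx * dt;
        Pitch += gy * dt;
        Yaw   += gz * dt;

        // Position: integrate acceleration twice (velocity, then position).
        Vx += ax * dt; Vy += ay * dt; Vz += az * dt;
        X += Vx * dt;  Y += Vy * dt;  Z += Vz * dt;
    }
}
```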
Depth perception is more complex and depends on your hardware setup. I'd recommend you look into the excellent OpenCV library, which already has Java bindings, and make sure you have a good grasp of the basics of computer vision (calibration, the camera matrix, the pinhole model, etc.). The first two answers in this SO question should get you started on determining depth using a single camera.
I'm developing an app that uses device sensors to determine the user's x-axis rotation and y-axis pitch (essentially the user spins in a circle and looks up at the sky or down at the ground). I developed this app for a phone using the Android Sensor.getRotationMatrix and Sensor.getOrientation functions and then using the first two resulting orientation values. I've now moved my app to a Project Tango tablet and these values no longer seem to be valid. I've looked into Project Tango a bit and it seems that it measures things in quaternions. Does this mean that Project Tango is not meant to implement the Android SDK?
The Project Tango APIs (which are for Android only) and the Android SDK are both required to build Project Tango apps. The Tango APIs offer higher-level interfaces to the Android device sensors than the Android SDK's direct access to sensor state - the Tango APIs combine sensor states to deliver a more complete "pose" (six-degrees-of-freedom position and orientation), as well as 3D (X, Y, depth) scene points and even feature recognition in scenes, etc. The crucial benefit of the Tango APIs is syncing several different sensors very precisely in real time, so the pose state is very accurate; indeed, the latest Tango devices support that sync inside the CPU circuitry itself. An app collecting that data from the sensors using the (non-Tango) Android SDK APIs will not be fast enough to correlate the sensors the way the Tango APIs do. So perhaps you're getting sensor data that's not synced, which shows up as offsets.
Also, a known bug in the Tango APIs is that the device's compass sensor returns garbage values. I don't know whether that bug affects the quality of data returned by the Android SDK's direct calls to the compass, but the Android SDK's compass readings are going to be at least somewhat out of sync with the state returned by the Tango API calls.
In theory the Android SDK should still work, so your app should run without any change, but it won't take advantage of the improvements provided by Project Tango.
To get the advantages of Tango (e.g. the fisheye camera for improved motion tracking), you need to use the Tango API to activate the Tango service and then, yes, work with the pose as quaternions.
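If you only need the two angles the question describes (heading around the vertical axis and up/down pitch), you can derive them from the pose quaternion. The sketch below (again in C#; the math is identical in Java) uses a standard quaternion-to-Euler conversion. Which physical axis ends up as "vertical" depends on the Tango coordinate frame pair you request, so treat the yaw/pitch axis assignment as an assumption to verify on the device.

```csharp
// Convert a unit quaternion (x, y, z, w) to yaw (rotation about Z) and pitch
// (rotation about Y) using the standard ZYX Euler decomposition.
public static void QuaternionToYawPitch(double x, double y, double z, double w,
                                        out double yaw, out double pitch)
{
    yaw = Math.Atan2(2.0 * (w * z + x * y), 1.0 - 2.0 * (y * y + z * z));

    // Clamp to avoid NaN from rounding error near the poles.
    double sinp = 2.0 * (w * y - z * x);
    if (sinp > 1.0) sinp = 1.0;
    if (sinp < -1.0) sinp = -1.0;
    pitch = Math.Asin(sinp);
}
```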
I know that in the constructor of the GeoCoordinateWatcher object there is the possibility to specify the accuracy (default or high), but for my university project I need to know more.
My professor asked me to also research and specify the algorithm or the heuristics used by the GeoCoordinateWatcher to choose its source.
I'm already aware of the MSDN article which says
Although the Location Service uses multiple sources of location information, and any of the sources may not be available at any given time (for example, no GPS satellites or cell phone towers may be accessible), the native code layer handles the work of evaluating the available data and choosing the best set of sources. All your application needs to do is to choose between high accuracy or the default, power-optimized setting. You can set this value when you initialize the main Location Service class, GeoCoordinateWatcher.
but I need to know more precisely how the native code layer evaluates and chooses among the sources.
Can anyone help me with this or point me to a more detailed article?
If you take a look into the source code of the System.Device assembly (by using a decompiler like dotPeek), you can see how it works.
In fact, GeoCoordinateWatcher is just a small wrapper that creates a COM object of type ILocation. This interface is part of the Location API, which Microsoft introduced with Windows 7; that in turn is part of the Sensor API, which also started with Windows 7.
If you dig a little through this documentation, you'll find this introduction article, which describes how the API works. One sentence within this introduction is:
Sensor manufacturers can create device drivers to connect sensors with Windows 7. Sensor device drivers are implemented by using the Windows Portable Devices (WPD) driver model, which is based on the Windows User Mode Driver Framework (UMDF). Many device drivers have been written by using these frameworks.
So the manufacturers of GPS devices provide a Windows driver that is installed on the system. This driver announces itself to the system as a location device.
When you create a GeoCoordinateWatcher, it asks the Location API for the desired data. The operating system checks which drivers have announced themselves as capable and starts them. These drivers then open the connection to the device, read the data, and forward it to the consumers.
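From the managed side, the accuracy hint from the MSDN quote above is the only knob you have; nothing in the API lets you influence or inspect the source-selection heuristics. A minimal sketch of the usage:

```csharp
using System;
using System.Device.Location;   // System.Device assembly

class Program
{
    static void Main()
    {
        // High accuracy asks the native layer to prefer more precise sources
        // (e.g. GPS); how it weighs the available sources stays hidden.
        var watcher = new GeoCoordinateWatcher(GeoPositionAccuracy.High);

        watcher.StatusChanged += (s, e) =>
            Console.WriteLine("Status: " + e.Status);

        watcher.PositionChanged += (s, e) =>
            Console.WriteLine("Lat {0}, Lon {1} (accuracy {2} m)",
                e.Position.Location.Latitude,
                e.Position.Location.Longitude,
                e.Position.Location.HorizontalAccuracy);

        watcher.Start();
        Console.ReadLine();
        watcher.Stop();
    }
}
```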
I am new to WP7 development and working on a project (a WP7 app) where I need to get ONLY the gravity force using the accelerometer API. I think I could do something with the Motion API, but it requires the Windows Phone to support a compass and gyroscope as well. So is there any way to separate gravity from acceleration, or to get only the gravity forces on the X, Y and Z axes using the accelerometer alone? (I want my app to run on WP devices that have no compass or gyroscope.)
Also, in Android there are methods like:
Linear acceleration
Low-pass / high-pass filters, etc.
Do we have that kind of support in WP7?
Thanks
The phone itself doesn't know what force is caused by acceleration and what is caused by gravity. You would need information from other sensors to be able to do the math to separate the values. That's what the Motion API is for.
So your only option is to use the Motion API. It will fail gracefully if the device doesn't have the necessary sensors, but it will work if they are present:
The Motion API used by this sample requires all of the supported Windows Phone sensors, and therefore these sample applications will fail gracefully but will not work properly on devices without the necessary sensors or on the device emulator.
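A minimal sketch of that pattern, using names from Microsoft.Devices.Sensors; the Gravity and DeviceAcceleration vectors of the reading are exactly the gravity/user-acceleration split the question asks for:

```csharp
using Microsoft.Devices.Sensors;   // Motion API (combined sensor stack)

public class GravityReader
{
    private Motion motion;

    public void Start()
    {
        // Motion needs the full sensor set (accelerometer, compass, gyroscope);
        // on hardware without them IsSupported is false and we must bail out.
        if (!Motion.IsSupported)
            return;   // fall back to the accelerometer-only filter shown below

        motion = new Motion();
        motion.CurrentValueChanged += (s, e) =>
        {
            var gravity   = e.SensorReading.Gravity;            // gravity only
            var userAccel = e.SensorReading.DeviceAcceleration;  // gravity removed
            // ... use gravity.X / gravity.Y / gravity.Z here ...
        };
        motion.Start();
    }
}
```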
There was a post on the Windows Phone Team blog about implementing a high-pass/low-pass filter on the accelerometer data. I've used this with fairly good results.
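For the accelerometer-only case, the usual trick is a simple low-pass filter: gravity changes slowly, so smoothing the raw readings isolates it, and subtracting that estimate back out gives a rough "linear" acceleration. A hedged sketch (the smoothing factor is something you would tune):

```csharp
using Microsoft.Devices.Sensors;   // plain Accelerometer, no compass/gyro needed

public class GravityFilter
{
    private Accelerometer accel;
    private double gravityX, gravityY, gravityZ;   // low-pass output = gravity estimate
    private const double Alpha = 0.8;              // smoothing factor, tune per device

    public void Start()
    {
        accel = new Accelerometer();
        accel.ReadingChanged += (s, e) =>
        {
            // Low-pass: keep most of the previous estimate and blend in a little
            // of the new sample. The slowly changing part approximates gravity.
            gravityX = Alpha * gravityX + (1 - Alpha) * e.X;
            gravityY = Alpha * gravityY + (1 - Alpha) * e.Y;
            gravityZ = Alpha * gravityZ + (1 - Alpha) * e.Z;

            // High-pass (the remainder) approximates user-caused acceleration.
            double linearX = e.X - gravityX;
            double linearY = e.Y - gravityY;
            double linearZ = e.Z - gravityZ;
            // ... use gravityX/Y/Z or linearX/Y/Z here ...
        };
        accel.Start();
    }
}
```

This is the same idea Android's linear-acceleration sensor approximates in software, so it maps well to devices without a gyroscope.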
I know that Windows Phone 7 has five sensors (A-GPS, accelerometer, compass, light, and proximity), plus the microphone, WiFi, Bluetooth, camera, etc.
I can access the GPS, accelerometer, microphone, and camera, but I cannot find APIs for accessing the raw data of the compass, light, proximity, WiFi, and Bluetooth.
What I need now is to scan for WiFi frequently and get the IDs of nearby access points. Is that possible?
Thanks.
You haven't found the API for these functions because they aren't exposed in the SDK at the moment.
The compass was pulled before the CTP for being just slightly below quality expectations; light and proximity aren't exposed at the moment, and neither is Bluetooth via the third-party SDK.
Data is accessible via WiFi, and WiFi contributes to the Location Service. However, you can't access low-level WiFi network data at the moment.