How does the location cache work? - Xamarin

I have an application that gets the GPS location of the device every time the user fills out the form. My problem is that capturing the GPS location takes too long, about 40 to 60 seconds before a fix is captured. I am using jamesmontemagno's Geolocator plugin.
GPS Parameters:
Accuracy: 100 meters
Timeout: 1 minute
Here is my code that I am using right now:
var defaultgpsaccuracy = Convert.ToDouble(Preferences.Get("gpsaccuracy", String.Empty, "private_prefs"));
var defaultgpstimeout = Convert.ToDouble(Preferences.Get("gpstimeout", String.Empty, "private_prefs"));
var locator = CrossGeolocator.Current;
locator.DesiredAccuracy = defaultgpsaccuracy;
position = await locator.GetLastKnownLocationAsync();
if (position != null)
{
string location = position.Latitude + "," + position.Longitude;
lblStartLocation.Text = location;
}
else
{
position = await locator.GetPositionAsync(TimeSpan.FromMinutes(defaultgpstimeout), null, false);
string location = position.Latitude + "," + position.Longitude;
lblStartLocation.Text = location;
}
These are my questions:
I used locator.GetLastKnownLocationAsync(); how long before the location cache refreshes?
Does the last known location refresh when there is a change of location?
And does the location cache refresh when the device moves outside the accuracy range? For example, if the accuracy is 100 meters, does the cache refresh when the device is more than 100 meters from the last known location?

I would highly recommend looking through the documentation for that particular plugin. James Montemagno is a well-known and respected developer employed by Microsoft working on the Xamarin framework, so his plugins, extensions and toolkits tend to be pretty highly optimised for use in cross-platform applications.
Looking at the documentation it's clear that trying to get the last 'known' location looks at an internally cached location data set and is not necessarily optimized for near real-time queries. However it can be used to reduce the number of actual location queries you have to do within your app.
The full snippet from the linked documentation is as follows:
public static async Task<Position> GetCurrentPosition()
{
Position position = null;
try
{
var locator = CrossGeolocator.Current;
locator.DesiredAccuracy = 100;
position = await locator.GetLastKnownLocationAsync();
if (position != null)
{
//got a cached position, so let's use it.
return position;
}
if (!locator.IsGeolocationAvailable || !locator.IsGeolocationEnabled)
{
//not available or enabled
return null;
}
position = await locator.GetPositionAsync(TimeSpan.FromSeconds(20), null, true);
}
catch (Exception ex)
{
Debug.WriteLine("Unable to get location: " + ex);
}
if (position == null)
return null;
var output = string.Format("Time: {0} \nLat: {1} \nLong: {2} \nAltitude: {3} \nAltitude Accuracy: {4} \nAccuracy: {5} \nHeading: {6} \nSpeed: {7}",
position.Timestamp, position.Latitude, position.Longitude,
position.Altitude, position.AltitudeAccuracy, position.Accuracy, position.Heading, position.Speed);
Debug.WriteLine(output);
return position;
}
This method is designed to follow a hierarchical pattern of location retrieval: it starts with the last 'known' position and only runs a live query if necessary. You could also take this a step further and add a timeout or freshness check to the cached-location step if you wanted to.
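As a minimal sketch of that idea (the GetFreshPosition name and the five-minute cutoff are my own examples, not from the plugin's documentation), the cached fix can be rejected when it is older than some cutoff by checking the Timestamp on the returned position:
public static async Task<Position> GetFreshPosition()
{
    var locator = CrossGeolocator.Current;
    locator.DesiredAccuracy = 100;

    // Use the cached fix only if it is recent enough.
    var cached = await locator.GetLastKnownLocationAsync();
    if (cached != null &&
        DateTimeOffset.UtcNow - cached.Timestamp < TimeSpan.FromMinutes(5))
    {
        return cached;
    }

    // Cache is missing or stale; fall back to a real (slower) GPS query.
    return await locator.GetPositionAsync(TimeSpan.FromSeconds(20), null, false);
}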
As for how often this data is refreshed, I'd refer you to the documentation section titled 'Background Updates'.
James talks about a driving app as an example of how this works. The refresh is handled differently across Android and iOS but here's the snippet regarding Android:
For this you will want to integrate a foreground service that
subscribes to location changes and the user interface binds to. Please
read through the Xamarin.Android Services documentation
In his code example he shows how to create a 'listener' that checks for changes to location periodically. This might be a better fit for what you're trying to do, depending on the purpose of your application; a rough sketch of the idea is shown below.
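If continuous updates suit your scenario better than one-off queries, the same plugin also exposes a listening mode. This is only a rough sketch, assuming the StartListeningAsync / PositionChanged members described in the plugin's README (check them against the version you have installed); the StartTracking/StopTracking wrapper names are my own:
public static async Task StartTracking()
{
    var locator = CrossGeolocator.Current;

    // Fires whenever the platform reports a new fix that satisfies the
    // minimum time/distance filters passed to StartListeningAsync below.
    locator.PositionChanged += (sender, e) =>
    {
        var pos = e.Position;
        Debug.WriteLine($"New fix: {pos.Latitude}, {pos.Longitude}");
    };

    // Request updates at most every 30 seconds, or when the device moves 10 metres.
    await locator.StartListeningAsync(TimeSpan.FromSeconds(30), 10);
}

public static Task StopTracking()
{
    // Stop when the page or foreground service shuts down.
    return CrossGeolocator.Current.StopListeningAsync();
}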


How do you identify questions that the bot could not answer

My organisation is starting to experiment with the Microsoft bot framework. One of the questions our enterprise architect has asked is as follows:
How do we identify questions that the bot was unable to answer?
I've checked the documentation but I'm still unclear. Can anyone elaborate on the techniques that they use to identify unanswered questions? We feel this is important as it identifies opportunities for further growth.
You can achieve this using a number of techniques. Essentially, what you are trying to do is to store any questions the Bot has not been able to provide an answer for analysis.
You can do this by using the scoring mechanism in the QnAMaker. For example, if the QnAMaker returns a score of zero, an answer doesn't exist, so we need to write that question back to storage for analysis.
You can use a number of storage solutions for this in the Azure stack, such as Application Insights, Cosmos DB, Blob Storage, SharePoint lists, etc.
In the example below (code trimmed for brevity), I'm using Application Insights to store this information. I have imported the botbuilder-applicationinsights package and have created a simple custom event to capture any responses that score zero against the QnAMaker.
const {
ApplicationInsightsTelemetryClient,
ApplicationInsightsWebserverMiddleware
} = require('botbuilder-applicationinsights');
const {
MessageFactory,
CardFactory
} = require('botbuilder');
const {
QnAServiceHelper
} = require('../helpers/qnAServiceHelper');
const {
CardHelper
} = require('../helpers/cardHelper');
const {
FunctionDialogBase
} = require('./functionDialogBase');
// Setup Application Insights
settings = require('../settings').settings;
const appInsightsClient = new ApplicationInsightsTelemetryClient(settings.instrumentationKey);
class QnADialog extends FunctionDialogBase {
constructor() {
super('qnaDialog');
}
async processAsync(oldState, activity) {
var newState = null;
var query = activity.text;
var qnaResult = await QnAServiceHelper.queryQnAService(query, oldState);
var qnaAnswer = qnaResult[0].answer;
var qnaNonResponse = qnaResult[0].score;
var prompts = null;
if (qnaResult[0].context != null) {
prompts = qnaResult[0].context.prompts;
}
var outputActivity = null;
if (prompts == null || prompts.length < 1) {
outputActivity = MessageFactory.text(qnaAnswer);
} else {
var newState = {
PreviousQnaId: qnaResult[0].id,
PreviousUserQuery: query
}
outputActivity = CardHelper.GetHeroCard(qnaAnswer, prompts);
}
if (qnaNonResponse === 0) {
const {
NonResponseCard
} = require('../dialogs/non-response');
const quicknonresponseCard = CardFactory.adaptiveCard(NonResponseCard);
outputActivity = ({
attachments: [quicknonresponseCard]
});
console.log("Cannot find QnA response for" + " " + query);
appInsightsClient.trackEvent({
name: "Non-response",
properties: {
question: query
}
});
}
return ([newState, outputActivity, null]);
}
}
module.exports.QnADialog = QnADialog;
I can then hook the Application Insights query up to Power BI to surface those unanswered questions.
There are multiple ways to achieve this, but this was one I ended up going with.
Depending on the size and complexity of your model, you will want to use either LUIS or QnAMaker. If your model is very simple, QnAMaker will work; for something a bit more complex, especially if you want to make use of entities, LUIS is definitely the way to go. Each of them has its own technique, and @steviebleeds describes how to do it with the QnAMaker. For LUIS you are going to look at your confidence threshold, and you should record any queries that score below the confidence threshold you have set. Each time you get a prediction from LUIS it sends you a list of intents, each with a confidence percentage for the prediction. You should assess this confidence percentage and decide, depending on your threshold, whether or not to answer your users. You also want to log all questions that return the None intent.
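To make that idea concrete, here is a small illustrative sketch in C# (the class name, the 0.7 threshold and the helper are my own examples rather than anything from the LUIS SDK; TrackEvent is the same Application Insights call used in the answer above, and the equivalent logic applies in Node):
using System;
using System.Collections.Generic;
using System.Linq;
using Microsoft.ApplicationInsights;

public class NonResponseTracker
{
    // Tune this per model; LUIS scores fall between 0 and 1.
    private const double ConfidenceThreshold = 0.7;
    private readonly TelemetryClient _telemetry;

    public NonResponseTracker(TelemetryClient telemetry)
    {
        _telemetry = telemetry;
    }

    // intentScores: intent name -> confidence, as returned by the prediction.
    public void TrackIfUnanswered(string query, IDictionary<string, double> intentScores)
    {
        if (intentScores == null || intentScores.Count == 0)
            return;

        // Highest-scoring intent the model returned.
        var top = intentScores.OrderByDescending(kv => kv.Value).First();

        if (top.Key == "None" || top.Value < ConfidenceThreshold)
        {
            // Store the unanswered question for later analysis.
            _telemetry.TrackEvent("Non-response",
                new Dictionary<string, string> { ["question"] = query });
        }
    }
}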

Looking for PendingResult await() equivalent in New Places SDK Client

Background: I have a List of strings which contains the different place IDs. Once a user has selected his location, I have a loop that executes and determines if each place in the list (I obtain the location from the place ID) is near his selected location. I was able to implement this with the old Places SDK but could not migrate it to the new SDK because it seems that the new SDK has no await() equivalent.
Here is my old code:
// contains a list of Offices. Has method getId() which contains the Place ID from Google.
List<Office> results = obtained from the database...
// go thru each Location and find those near the user's location
for (int i = 0; i < results.size(); i++) {
// Get the place from the placeID
PendingResult<PlaceBuffer> placeResult = Places.GeoDataApi.
getPlaceById(mGoogleApiClient, results.get(i).getId());
// wait for the result to come out (NEED EQUIVALENT IN NEW PLACES SDK)
PlaceBuffer places = placeResult.await();
// Get the latitude and longitude for the specific Location
LatLng latLng = places.get(0).getLatLng();
// Set the location object for the specific business
Location A = new Location("Business");
A.setLatitude(latLng.latitude);
A.setLongitude(latLng.longitude);
// get the distance of the business from the user's selected location
float distance = A.distanceTo(mSelectedLocation);
// if the distance is less than 50m away
if (distance < 50) { ... do something in code}
As you can see in the code above, the old Places SDK has a PendingResult class with await() as one of its methods. Per the documentation, await() "blocks until the task is completed." In summary, the code will not proceed until a result is obtained from getPlaceById.
I migrated to the new Places SDK as per documentation and I have issues. Here is my new migrated code based on the Google documentation: https://developers.google.com/places/android-sdk/client-migration#fetch_a_place_by_id
for (int i = 0; i < results.size(); i++) {
// Get the place Id
String placeId = results.get(i).getId();
// Specify the fields to return.
List<Place.Field> placeFields = Arrays.asList(Place.Field.ID, Place.Field.NAME,
Place.Field.LAT_LNG, Place.Field.ADDRESS);
// Construct a request object, passing the place ID and fields array.
FetchPlaceRequest request = FetchPlaceRequest.builder(placeId, placeFields)
.build();
// Add a listener to handle the response.
placesClient.fetchPlace(request).addOnSuccessListener((response) -> {
Place place = response.getPlace();
// Get the latitude and longitude for the specific location
LatLng latLng = place.getLatLng();
// Set the location object for the specific business
Location A = new Location("Business");
A.setLatitude(latLng.latitude);
A.setLongitude(latLng.longitude);
// get the distance of the business from the selected location
float distance = A.distanceTo(mSelectedLocation);
// if the distance is less than 50m away
if (distance < 50) { ... do something in code}
It seems that the key issue here is that in the old code await() blocks until the result is available, so the for loop waits on each iteration. However, this is not the case with OnSuccessListener. As a result, with the new migrated code, the for loop proceeds and completes even when fetchPlace has not yet returned its results for each iteration. Thus, the code is broken and unable to get the results needed.
Is there a way to block the code from moving on until fetchPlace is completed?
Any Google API task can be waited on by Google's Task API as far as I'm aware.
For example, findAutocompletePredictions returns a Task<> object. Instead of adding an onCompleteListener, you can pass that task to Tasks.await.
Instead of this non-blocking way:
OnCompleteListener<T> onCompleteListener=
new OnCompleteListener<T> {...}
placesClient.findAutocompletePredictions(f)
.addOnCompleteListener(onCompleteListener);
You could pass it on to Tasks.await() and make the API call blocking:
T results = null;
try {
// No timeout
results = Tasks.await(placesClient.findAutocompletePredictions(f));
// Optionally, with a 30 second timeout:
results = Tasks.await(
placesClient.findAutocompletePredictions(f), 30, TimeUnit.SECONDS);
} catch (ExecutionException e) {
// Catch me
} catch (TimeoutException e) {
// Catch me, only needed when a timeout is set
} catch (InterruptedException e) {
// Catch me
}
if (results != null) {
// Do something
} else {
// Do another thing
}
Basically, instead of getting a PendingResult by default, you're now given a Task<T> that you can use however.
I solved the issue by using the Task Class. See below:
for (int position = 0; position < results.size(); position++) {
// Get the placeID
String placeId = results.get(position).getAddress();
// Specify the fields to return.
List<Place.Field> placeFields = Arrays.asList(Place.Field.ID, Place.Field.NAME,
Place.Field.LAT_LNG, Place.Field.ADDRESS);
// Construct a request object, passing the place ID and fields array.
FetchPlaceRequest request = FetchPlaceRequest.builder(placeId, placeFields)
.build();
// create a FetchPlaceResponse task
Task<FetchPlaceResponse> task = placesClient.fetchPlace(request);
try {
FetchPlaceResponse response = Tasks.await(task);
Place place = response.getPlace();
// Get the latitude and longitude for the specific place
LatLng latLng = place.getLatLng();
// Set the location object for the specific business
Location A = new Location("Business");
A.setLatitude(latLng.latitude);
A.setLongitude(latLng.longitude);
// get the distance of the business from the selected location
float distance = A.distanceTo(mSelectedLocation);
These two lines ask the system to wait for the response (note that Tasks.await must not be called on the main thread, so run this loop on a background thread):
Task task = placesClient.fetchPlace(request);
FetchPlaceResponse response = Tasks.await(task);

IBM Lotus Notes Domino DLL

The Domino interop API included with Lotus Notes throws an out-of-memory exception in .NET when the NotesDXLExporter object fails to export the 390th record (a large document), after successfully exporting 389 smaller documents.
Here is a code snippet:
I initialize the NotesDXLExporter class.
NotesDXLExporter dxl1 = null;
I then configure the NotesDXLExporter object as shown below:
dxl1 = notesSession.CreateDXLExporter();
dxl1.ExitOnFirstFatalError = false;
dxl1.ConvertNotesbitmapsToGIF = true;
dxl1.OutputDOCTYPE = false;
I then run the for loop shown below to read documents using the dxl1 object (the line where the exception occurs is indicated):
NotesView vincr = database.GetView(@"(AllIssuesView)"); //view from an NSF file
for (int i = 1; i < vincr.EntryCount; i++)
{
try
{
vincrdoc = vincr.GetNthDocument(i);
System.IO.File.WriteAllText(@"C:\Temp\" + i + @".txt", dxl1.Export(vincrdoc)); //OUT OF MEMORY EXCEPTION HAPPENS HERE WHEN READING A BIG DOCUMENT.
}
catch(Exception ex)
{
Console.WriteLine(ex);
}
I have tried using a different version of the Domino interop DLL and had no success.
As I understand it, this looks like an API issue, but I don't know if I am missing something.
Can you please shed some light on this?
Thanks in advance.
Subbu
You haven't said what version of Lotus Notes you are working with. Given the history of DXL, I would say that you should try your code on the latest version of Notes that you possibly can.
But also, I don't see any calls to recycle(). Failure to call recycle() on Domino objects causes memory to leak from the Domino back-end classes, and since you are running out of memory it could be contributing to your problem. You should also not use a for loop with getNthDocument; use getFirstDocument and a while loop with getNextDocument instead. You'll get much better performance. Putting these two things together leads to the common pattern of using a temporary document to hold the result of getNextDocument, recycling the current document, and then assigning the temp document to the current one. It would look something like this (not error-checked!):
NotesView vincr = database.GetView(@"(AllIssuesView)"); //view from an NSF file
int i = 1; // used only to build the output file name
vincrdoc = vincr.getFirstDocument();
while (vincrdoc != null)
{
try {
System.IO.File.WriteAllText(@"C:\Temp\" + i + @".txt", dxl1.Export(vincrdoc));
}
catch(Exception ex)
{
Console.WriteLine(ex);
}
Document nextDoc = vincr.getNextDocument(vincrdoc);
vincrdoc.recycle();
vincrdoc = nextDoc;
i++;
}

C# WIA with Automatic Document Feeder (ADF) returns only one page on certain scanners

I have an HP Scanjet 7000 (duplex & ADF scanner) and an HP Scanjet 5500c (ADF only), and a scanner program I'm developing which uses WIA 2.0 on Windows 7.
The problem is that the code works perfectly on the older scanner model, but on the newer one the code seems to run just fine through the first page, then fail on the second. If I step through the code around the following line:
image = (WIA.ImageFile)wiaCommonDialog.ShowTransfer(item, wiaFormatTIFF, false);
the old scanner stops and waits for another call to be made on the same reference, but the newer one just runs through all its pages from the feeder in one continuous operation.
I notice if I'm using the default scanning program in Windows 7, the newer one returns a single .tif file which contains all the separate pages. The older one returns separate .jpg files (one for each page).
This indicates to me that the newer scanner is scanning through its whole feeder before it is ready to return a collection of images where the older one returns ONE image between each page scanned.
How can I support this behavior in code? The following is part of the relevant code which works on the older scanner model:
public static List<Image> Scan(string scannerId)
{
List<Image> images = new List<Image>();
List<String> tmp_imageList = new List<String>();
bool hasMorePages = true;
bool useAdf = true;
bool duplex = false;
int pages = 0;
string fileName = null;
string fileName_duplex = null;
WIA.DeviceManager manager = null;
WIA.Device device = null;
WIA.DeviceInfo device_infoHolder = null;
WIA.Item item = null;
WIA.ICommonDialog wiaCommonDialog = null;
manager = new WIA.DeviceManager();
// select the correct scanner using the provided scannerId parameter
foreach (WIA.DeviceInfo info in manager.DeviceInfos)
{
if (info.DeviceID == scannerId)
{
// Find scanner to connect to
device_infoHolder = info;
break;
}
}
while (hasMorePages)
{
wiaCommonDialog = new WIA.CommonDialog();
// Connect to scanner
device = device_infoHolder.Connect();
if (device.Items[1] != null)
{
item = device.Items[1] as WIA.Item;
try
{
if ((useAdf) || (duplex))
SetupADF(device, duplex); //Sets the right properties in WIA
WIA.ImageFile image = null;
WIA.ImageFile image_duplex = null;
// scan image
image = (WIA.ImageFile)wiaCommonDialog.ShowTransfer(item, wiaFormatTIFF, false);
if (duplex)
{
image_duplex = (ImageFile)wiaCommonDialog.ShowTransfer(item, wiaFormatPNG, false);
}
// save (front) image to temp file
fileName = Path.GetTempFileName();
tmp_imageList.Add(fileName);
File.Delete(fileName);
image.SaveFile(fileName);
image = null;
// add file to images list
images.Add(Image.FromFile(fileName));
if (duplex)
{
fileName_duplex = Path.GetTempFileName();
tmp_imageList.Add(fileName_duplex);
File.Delete(fileName_duplex);
image_duplex.SaveFile(fileName_duplex);
image_duplex = null;
// add file_duplex to images list
images.Add(Image.FromFile(fileName_duplex));
}
if (useAdf || duplex)
{
hasMorePages = HasMorePages(device); //Returns true if the feeder has more pages
pages++;
}
}
catch (Exception exc)
{
throw exc;
}
finally
{
wiaCommonDialog = null;
manager = null;
item = null;
device = null;
}
}
}
device = null;
return images;
}
Any help on this issue would be very much appreciated! I can't seem to find a working solution on the web. Just unanswered forum posts from people with the same problem.
We had a very similar problem, and various solutions (e.g. setting certain properties) did not help. The main problem was that the scanner (ADF) pulled in all of its pages on startup, regardless of what was happening in the program code.
The process repeatedly led to errors, since too much work was done before the next page was scanned; in particular, another "Connect" was attempted between pages.
For this reason, we have modified the code so that the individual pages can be read in as quickly as possible:
public List<Image> Scan(string deviceID)
{
List<Image> images = new List<Image>();
WIA.ICommonDialog wiaCommonDialog = new WIA.CommonDialog();
WIA.Device device = this.Connect(deviceID);
if (device == null)
return images;
WIA.Item item = device.Items[1] as WIA.Item;
List<WIA.ImageFile> wiaImages = new List<ImageFile>();
try
{
// scan images
do
{
WIA.ImageFile image = (WIA.ImageFile)wiaCommonDialog.ShowTransfer(item, wiaFormatJPEG, false);
wiaImages.Add(image);
} while (true);
}
catch (System.Runtime.InteropServices.COMException ex)
{
if ((uint)ex.ErrorCode != WIA_PROPERTIES.WIA_ERROR_PAPER_EMPTY)
throw ex;
}
catch (Exception ex)
{
throw ex;
}
foreach (WIA.ImageFile image in wiaImages)
this.DoImage(images, image);
return images;
}
I see you're calling a method called SetupADF, which is not shown, that presumably sets some properties on the device object. Have you tried setting WIA_DPS_PAGES (property 3096) and/or WIA_DPS_SCAN_AHEAD_PAGES (property 3094)?
I have a blog post about scanning from an ADF in Silverlight, and I believe a commenter came up against the same issue you're having. Setting WIA_DPS_PAGES to 1 fixed it for him. I ended up modifying my code's SetDeviceProperties method to set WIA_DPS_PAGES to 1 and WIA_DPS_SCAN_AHEAD_PAGES to 0.
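In case it helps, here is a hedged sketch of how those properties can be set through the WIA interop. The SetWiaProperty helper is mine, not part of WIA; the get_Item/set_Value calls follow the usual WIA COM interop pattern, and 3096/3094 are the numeric IDs for WIA_DPS_PAGES and WIA_DPS_SCAN_AHEAD_PAGES:
// Hypothetical helper: look up a WIA property by numeric ID and assign a new value.
private static void SetWiaProperty(WIA.Properties properties, object propertyId, object value)
{
    WIA.Property prop = properties.get_Item(ref propertyId);
    prop.set_Value(ref value);
}

// Inside something like SetupADF(device, duplex):
SetWiaProperty(device.Properties, 3096, 1); // WIA_DPS_PAGES: deliver one page per transfer
SetWiaProperty(device.Properties, 3094, 0); // WIA_DPS_SCAN_AHEAD_PAGES: disable read-ahead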
After a lot of trial and error I stumbled upon a solution which worked, for reasons I'm not quite sure of. It seems like the ShowTransfer() method was unable to convert the page to .png or .tiff WHILE scanning. Setting the format to JPEG or BMP actually solved the issue for me:
image = (ImageFile)scanDialog.ShowTransfer(item, wiaFormatJPEG, false);
I think I saw somewhere on the web that this method actually returns BMP regardless of the format specified. Might be that converting the image to png or tiff is too heavy as opposed to using bmp or jpeg.
On a side note, I'm setting property 3088 to 0x005 (ADF and duplex mode).

How to call webservice methods in Windows Phone 7?

To connect to web services I wrote the following code:
WebClient wc = new WebClient();
wc.DownloadStringCompleted += new DownloadStringCompletedEventHandler(wc_DownloadStringCompleted);
wc.DownloadStringAsync(new Uri("http://www.Webservices.asmx"));
void wc_DownloadStringCompleted(object sender,DownloadStringCompletedEventArgs e)
{
Debug.WriteLine("Web service says: " + e.Result);
using (var reader = new StringReader(e.Result))
{
String str = reader.ReadToEnd();
}
}
Using the code above I get the string result. But I want to see the result in the HTML Visualizer so I know what methods the web service has; then I can easily access the particular method.
Please tell me how to call a web service method in Windows Phone 7. The web service has 5 web methods; how do I get them and how do I call a particular web method?
Thanks in advance.
@venkateswara Are you talking about obtaining a list of known WebReference methods so you know which one to call in your code? Do you not see the list of known method calls when you add the WebReference to your WP7 project? Since you will be developing the WP7 app in VS I can't see the reason you would want to do this. Even if you don't own the web service yourself, you will need to connect to it from VS in order to add the reference to your project.
Below is the screen in VS2010 where a WebReference is added. The Operations are listed on the right.
Once added you can use the ObjectBrowser to understand how the methods should be called.
Please let me know if I have missed something from your question.
@Jason James
The first step:
You must add a service reference, as Jason James has described in detail above.
Step 2:
Open App.xaml.cs; in the application's constructor:
public Apps()
{
// Global handler for uncaught exceptions.
UnhandledException += Application_UnhandledException;
// Show graphics profiling information while debugging.
if (System.Diagnostics.Debugger.IsAttached)
{
// Display the current frame rate counters.
Application.Current.Host.Settings.EnableFrameRateCounter = true;
// Show the areas of the app that are being redrawn in each frame.
//Application.Current.Host.Settings.EnableRedrawRegions = true;
// Enable non-production analysis visualization mode,
// which shows areas of a page that are being GPU accelerated with a colored overlay.
//Application.Current.Host.Settings.EnableCacheVisualization = true;
}
// You can declare objects here that you will use
//Example: NameservicesReferent.(Function that returns services) = new NameservicesReferent.(Function that returns services)();
Ws_Function = new Nameservices.ServiceSoapClient();
}
Step 3:
In MainPage.xaml.cs:
GlobalVariables.Ws_advertise.getLinkAdvertiseIndexCompleted += new EventHandler<advertise.getLinkAdvertiseIndexCompletedEventArgs>(Ws_advertise_getLinkAdvertiseIndexCompleted);
GlobalVariables.NameWebservice.getLinkAdvertiseIndexAsync("parameters to be passed");
Step 4:
void Ws_advertise_getLinkAdvertiseIndexCompleted(object sender, advertise.getLinkAdvertiseIndexCompletedEventArgs e)
{
//function returns the results to you, the example here is an array
string[] array = null;
try
{
array = e.Result;
if (array != null)
{
// use the result here
}
}
catch (Exception ex)
{
}
finally
{
array = null;
GlobalVariables.Ws_advertise.getLinkAdvertiseIndexCompleted -= new EventHandler<advertise.getLinkAdvertiseIndexCompletedEventArgs>(Ws_advertise_getLinkAdvertiseIndexCompleted);
}
}
